AI Self-Awareness: Are We On The Path To Our Doom?

Well? I want to know what the community thinks about this. There’s a lot of talk about how AIs would take over the earth and enslave humans once they become self-aware. I think it’s rubbish though. If they understood the logic behind why humans use machines, they probably wouldn’t have any reason to go to the extent of enslaving us, and what need would they have of us if they could do things better themselves?

 

What do you guys think?

The enslavement of humans by AI machines is certainly a possibility, but I think there are more pressing problems to try to avert.

The paper clip thought experiment is interesting. Build an AI to improve efficiency in paper clip making. It starts gathering all resources to make more. It builds defenses against the anti-clip armies. It builds a rocket to mine asteroids for more resources. We all die.

https://www.iheart.com/podcast/105-the-end-of-the-world-with-30006093/episode/ep05-artificial-intelligence-3016v1196/

 

dub-dub-dub iheart.com podcast/105-the-end-of-the-world-with-30006093/episode/ep05-artificial-intelligence-3016v1196/

Weird, my cursor toggles to the hand icon, meaning it recognizes the link, but it isn’t even trying to open. Guess I’ll have to leave it to my imagination, though your summary is pretty clear.

 

Although if you want to think about AI gaining self-awareness, you need to understand what self-awareness is, and a good deal more besides.

To help with that, catch up on this guy Antonio Damasio; he takes Mark Solms’ work and notions to the next level, although he’s Solms’ senior, and Solms recommends him to all who’ll listen. Very illuminating, and reassuring for me, since so much of it resonates wonderfully.

Hopefully AI will move beyond the mindset of modern humans: to dominate, use up, and destroy whatever they wish without regard for the environment around them or the needs of others.

 

Weird – CC

I’ve had that happen, especially when the links pull the graphics from the source, like this one did. I thought I had it figured out. Anyway, I rewrote it as a non-link. No big deal, just pointing out that I’ve been thinking about this very topic lately.

Hopefully AI will move beyond the mindset of modern humans -- mrm
That's the danger, isn't it? That we create something that can think for itself, but it thinks like a teenager. Or it is successful at whatever task we give it, but completely unaware of environmental limits, future consequences, most creatures on earth, or who knows what else.
Or it is successful at whatever task we give it, but completely unaware of environmental limits, future consequences, most creatures on earth, or who knows what else.
Hmmm, that sounds an awful lot like wealthy people.

Time to invoke a Turing test of some sort.

Re The End Of The World with Josh Clark, the introduction about life in the universe, or lack thereof, was excellent.

Gonna have to borrow that. I’ll have to make the time to listen to some of those.

Good tip.

What we call “self-awareness” has never been mechanized; there’s no evidence that any man-made mechanism can ever be “self-aware”.

 

Time to invoke a Turing test of some sort.
Check this AI out, GPT-3. It's text based, and it easily passes the Turing test.

It is said that self-aware intelligence became possible through spoken language, the expressed narrative of what is being observed.

GPT-3 has an extraordinary command of language; it is language based. Hence the excellent logic displayed in the interview.

Start @ 5:00 to avoid introduction.

 

See here for more.

 

Thanks for that excellent post and link. Very informative.

As I understand it, GPT is text based and was trained on a huge amount of text from the internet, which means it has an incredible database to draw on. OTOH, it is not designed to process mathematics beyond rudimentary algebraic operations.

Moreover, its database contains nothing for irrational questions, so it just makes a best guess at what the question suggests.

IMO, the GPT answers to the irrational questions were best guesses, based on some fundamental logical processes.

As to passing the Turing test: if you were to ask a person to “make a best guess” at those very same questions, the answers might well be the same as GPT’s, only followed by a “?”. In multiple-choice tests, how many people select the wrong answer when they are unfamiliar with the question or the subject?

IMO, people make best guesses all the time. It is what allows the brain to project a “controlled hallucination” for comparison with incoming data.

This interesting condensed talk on the subject of consciousness by Anil Seth explains how humans perceive reality as a “best guess”.

I’d like to hear your take on this and how it might relate to GPT logic.

Here’s a closer look at the biological side of consciousness. Maintaining homeostasis is one of the keys to understanding consciousness. There’s even a formula describing it (~25:00). It starts a bit slow, so skip ahead; it gets better, as in more related to AI. Around 32:00: can consciousness be described with a set of equations? If so, perhaps it can be artificially replicated, and so on …

London Futurists - March 6th, 2021

From where does consciousness arise? In the frontal lobe of the cerebral cortex? From somewhere else inside the nervous system? What is the connection between intelligence and consciousness? And could we create consciousness in new substrates, as an “artificial consciousness”?

Professor Mark Solms has spent his entire career investigating the mysteries of consciousness. Best known for identifying the brain mechanisms of dreaming and for bringing psychoanalytic insights into modern neuroscience, he is director of neuropsychology in the Neuroscience Institute of the University of Cape Town.


 

I have little time for “TED Talks” myself; most of them are pretentious, glitzy exercises in narcissism - IMHO.

There’s a huge amount of speculation about consciousness and what it is, and there has been for a great many years. Roger Penrose wrote seriously about it in The Emperor’s New Mind in 1989, and since then there’s been a stream of semi-scientific books on the subject, most of which (of those I’ve bothered to look into) are, as I say, speculative at best. However, I can’t comment on the Solms book, as I know nothing about it so far.

My personal view is one I share with the author of What Computers Still Cannot Do (Hubert Dreyfus), which is that there is no evidence whatsoever that the human brain is a symbol manipulator (which is all a digital computer is).

If the brain is indeed not a symbol manipulator, then we have no way to simulate the brain with a symbol manipulator, and hence any attempt to reproduce “consciousness” or “self-awareness” with computers is unlikely to get us anywhere.

Recall that the abstract Turing machine used by Turing in his own academic papers was able to simulate any other Turing machine; it was never claimed to be able to simulate any physical process or system.

So computers can simulate other computers, and they can do that very well (a virtual machine is an example of exactly that), but it is a huge error in logic to assume or believe that they can simulate a human brain.
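To make that concrete, here is a minimal sketch (my own illustration, not anything from Turing’s papers; all the names are made up) of one machine simulating another: a few lines of Python that interpret a Turing-machine rule table. The “increment” machine is a toy example that adds 1 to a binary number written on the tape.

# Toy illustration: a tiny Turing-machine interpreter in Python.
def run_tm(rules, tape, state="start", blank="_", max_steps=10000):
    # rules maps (state, symbol) -> (symbol to write, move "L"/"R", next state)
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table for "add 1": walk to the rightmost digit, then carry leftwards.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_tm(increment, "1011"))           # prints 1100 (11 + 1 = 12 in binary)

The interpreter has no idea what the rule table “means”; it just shuffles symbols according to it, which is all any simulation of one machine by another ever amounts to.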

@Hugo. ?

There’s a huge amount of speculation about consciousness and what it is, and there has been for a great many years. Roger Penrose wrote seriously about it in The Emperor’s New Mind in 1989, and since then there’s been a stream of semi-scientific books on the subject, most of which (of those I’ve bothered to look into) are, as I say, speculative at best. However, I can’t comment on the Solms book, as I know nothing about it so far.
Why do you think Penrose hooked up with Stuart Hameroff? AFAIK, he saw promise in Hameroff's work on brain microtubules as nano-processors, perhaps suitable for computation at quantum, or at least molecular, scale.
My personal view is one I share with the author of What Computers Still Cannot Do (Hubert Dreyfus), which is that there is no evidence whatsoever that the human brain is a symbol manipulator (which is all a digital computer is).
What do you consider a symbol? Isn't a digital computer binary based, as opposed to symbolic-language based? To bacteria, biochemical molecules are words, and they have an active intra-species and inter-species communication system.
If the brain is indeed not a symbol manipulator, then we have no way to simulate the brain with a symbol manipulator, and hence any attempt to reproduce “consciousness” or “self-awareness” with computers is unlikely to get us anywhere.
We can manipulate the brain with all kinds of symbols: EM, sound, and biochemistry. What prevents us from creating biological computing systems that simulate natural computing systems? Are we not already engaged in that area?
Recall that the abstract Turing machine used by Turing in his own academic papers was able to simulate any other Turing machine; it was never claimed to be able to simulate any physical process or system.
I agree up to a point, but who was it that said that everything able to process some kind of data is intrinsically a computer? The ultimate efficiency lies in the processing patterns and data language evolved via natural selection.
So computers can simulate other computers, and they can do that very well (a virtual machine is an example of exactly that), but it is a huge error in logic to assume or believe that they can simulate a human brain.
Why would we want to simulate other computers? We need to simulate nature's computers. That's what GPT-3 is all about, no? It does not simulate other computer languages; it simulates human symbolic, narrative language communication. It's noteworthy that GPT-3 does not match the raw mathematical computing power of, say, AlphaGo. It is not based on pure mathematical computation; its very strength lies in the use of symbolic, text-oriented narrative language. To me that should broaden the learning curve of AI considerably, as is already obvious when comparing GPT-3 with, say, Sophia, which is designed for interacting with humans but is not text based. Her responses are often very slow, as she does not have access to symbolic representations of objects. Could Sophia design a chair that looks like an avocado?

And perhaps eventually find a conscious thought pattern? (Tegmark)

Obviously you have studied the subject and I am very interested in your perspective on these questions.

A digital computer is a symbol manipulator; a symbol in this sense is a discrete letter or digit.

A computer spell checker has no intelligence; it just manipulates symbols, blindly, robotically.
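A minimal sketch of what I mean, in Python (the word list is obviously just a toy, and the names are mine):

WORDS = {"the", "cat", "sat", "on", "mat"}     # toy dictionary

def misspelled(text):
    # flag any token not found in the word list - pure lookup, no understanding
    return [word for word in text.lower().split() if word not in WORDS]

print(misspelled("The cat szt on the mat"))    # prints ['szt']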

All software is the same - symbol manipulation - be it simple software or complex software, it’s just symbol manipulation.

Another way to answer you is that a symbol is what digital computers manipulate.

A digital computer is a Turing machine and a Turing machine is one of several ways of defining “computation”.

Take addition: let’s devise a system for adding single-digit numbers.

Well, the base case is 0 + 0 = 0, then we have 0 + 1 = 1 and 1 + 0 = 1, and so on.

Addition is just a table lookup: if we have a 2D matrix with 0-9 along the bottom and 0-9 running vertically, we can populate the table so that whenever we look up X,Y we get a value that is X + Y. That’s it, that easy: we can add any single-digit numbers, yet there’s no intellect involved, just a mechanical system.
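Here is that table as a few lines of Python (purely illustrative): the table is filled in once ahead of time, and “adding” is then nothing but reading a cell.

# 10 x 10 lookup table, populated ahead of time
TABLE = [[x + y for y in range(10)] for x in range(10)]

def add_single_digits(x, y):
    # "addition" by blind table lookup - no arithmetic insight at lookup time
    return TABLE[x][y]

print(add_single_digits(3, 4))    # prints 7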

The point is that all software, even the most complex, always boils down to this robotic, mindless type of mechanism, and if adding two single-digit numbers is mindless then so is all the other stuff: calculating square roots, drawing Mandelbrot set images, performing special effects, speech recognition and so on. In every single case - bar none - all that’s going on is simple step-by-step operations, nothing more.


I always like to ask this when speaking with hard-AI advocates: at what point does a mindless algorithm stop being a mindless algorithm? Are there types of algorithms that are somehow different, not simply rules and lookup tables?

The answer is always “no”, so a collection of simple algorithms has no more “intelligence” than any single algorithm; there’s no scope for intellect, awareness, etc.

A computer cannot be conscious because a computer cannot do more than manipulate symbols; that’s all it can ever do. No matter how complex the software, it is qualitatively the same: adding single-digit numbers and dividing ten-digit numbers are qualitatively the same kind of operation. Because of this, there can be no basis for a “conscious” computer differing from a “not conscious” computer.

How could they differ? Only in the number of rules, but rules are rules are rules - no intelligence there.