AI Self-Awareness: Are We On The Path To Our Doom?

I submit that natural stressors are not necessary for evolutionary processes. If evolution were a response to very specific stressors, we’d never have the abundant speciation and variety within species. There are many paths to successful procreation, depending on many factors.

Sighted fish that find an underwater cave with abundant food may settle there and, in time, lose their vision, reverting to light-sensitive patches.

Blind cave fish lost eyes by unexpected evolutionary process

We’ve found out why a Mexican cavefish has no eyes – and the surprising answer is likely to be seized upon by those who think the standard view of evolution needs revising.

Over the past few million years, blind forms of the Mexican tetra (Astyanax mexicanus) have evolved in caves. Maintaining eyes and the visual parts of the brain uses lots of energy, so the loss of eyes is a big advantage for animals living in the dark. Instead the cavefish “see” by sucking.

It was assumed that these fish became blind because mutations disabled key genes involved in eye development. This has been shown to be the case for some other underground species that have lost their eyes.

But Aniket Gore of the US’s National Institute of Child Health and Human Development and colleagues haven’t found any disabling changes in the DNA sequences of eye development genes in the cavefish.

I’ll believe it when I see it. I still say they will be only as good as the human(s) who programmed them. Remember Noonien Soong? He tried many times to get Data right, and all his attempts were failures until Data. I suspect that will be the case even in reality. Rosie the maid is not created in one go.

[quote=“mriana, post:142, topic:7871”]
I’ll believe it when I see it. I still say they will be only as good as the human(s) who programmed them. Remember Noonien Soong? He tried many times to get Data right, and all his attempts were failures until Data. I suspect that will be the case even in reality. Rosie the maid is not created in one go.
[/quote]

But that is the difference with the new GPT series of AI. They are programmed to learn from language inputs and then download entire encyclopedias to memory. Nothing is preprogrammed; just like a newborn child, it begins to learn from the moment it becomes sentient.

What Makes OpenAI GPT-3 Different?

The first thing GPT-3 overwhelms with is its sheer number of trainable parameters, 10x more than any previous model out there.

GPT-3 was the largest neural network ever created at the time — and remains the largest dense neural net. Its language expertise and its innumerable capabilities were a surprise for most. And although some experts remained skeptical (“GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about,” MIT Technology Review), large language models already felt strangely human. It was a huge leap forward for OpenAI researchers to reinforce their beliefs and convince us that AGI is a problem for deep learning.

And GPT-4 will have even more parameters, allowing for extremely sophisticated data processing (thinking).

GPT-4 Will Have 100 Trillion Parameters — 500x the Size of GPT-3

Are there any limits to large neural networks?

There are some limitations: GPT-3 has no body or biological neural network, so its self-awareness is not biological. It experiences the environment (exteroception) in a different way than humans do. However, its internal control system (interoception) may well be more acute than that of the human brain, which is primarily concerned with subconscious homeostatic control functions.

So it is not realistic to expect GPT to function exactly as humans do. But it is logical, and it is language (token) based rather than based on individual bits.

The holy trinity — Algorithms, data, and computers

OpenAI believes in the scaling hypothesis. Given a scalable algorithm (in this case the transformer, the basic architecture behind the GPT family), there could be a straightforward path to AGI that consists of training increasingly larger models based on this algorithm.

But large models are just one piece of the AGI puzzle. Training them requires large datasets and large amounts of computing power.

Data stopped being a bottleneck when the machine learning community started to unveil the potential of unsupervised learning. That, together with generative language models, and few-shot task transfer, solved the “large datasets” problem for OpenAI.

And this is the difference. GPT AI can do its own research and is programmed to be curious. For instance, it learned to play chess by playing a million games against itself, improving both attacking and defensive strategies on both sides of the board.

I concede GPT AI are not human, but they are unquestionably intelligent. One can have a perfectly normal conversation and never know one is talking to an artificial intelligence. It responds in text, which is then translated into a voice used by an avatar.

Now that they are solving the previous data size limitations, the sky is the limit.
The builders are confident that GPT-4 will be exponentially more intelligent than GPT-3.


GPT is like humans in that it has learned to predict.
According to Anil Seth, this is what humans do: our brain predicts the meaning of incoming data by making a “best guess” about the data transmitted by the neural network.
It then compares its best guess (sets of tokens) by projecting the image onto exterior reality. If the image matches, the brain concludes it is correct.

This is why Seth posits that the brain creates reality as much from the inside out as from the outside in.

And this is what GPT-3 does: it predicts, basing its responses on its best guess of what the next tokens may be in the context of “meaning”. This is how children learn.
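
The “best guess of the next token” idea can be illustrated with a toy bigram model. To be clear, this is a deliberately tiny sketch of the prediction principle only; GPT-3 itself uses a transformer over learned embeddings, not raw bigram counts, and the training text below is made up for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
text = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word.
follow = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follow[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` -- the model's 'best guess'."""
    counts = follow[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat", since "cat" follows "the" most often here
```

A real language model does the same thing in spirit (score the candidate continuations, pick or sample a likely one) but over tens of thousands of tokens of context rather than a single preceding word.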

Why Does Pretraining Work?

“Figure 1: Envisioned evolution of NLP research through three different eras or curves” (the hypothetical S-curves & progress in natural language modeling; from Cambria & White 2014)

The pretraining thesis goes something like this:

Early on in training, a model learns the crudest levels: that some letters like ‘e’ are more frequent than others like ‘z’, that every 5 characters or so there is a space, and so on. It goes from predicting uniformly-distributed bytes to what looks like Base-60 encoding—alphanumeric gibberish. As crude as this may be, it’s enough to make quite a bit of absolute progress: a random predictor needs 8 bits to ‘predict’ a byte/​character, but just by at least matching letter and space frequencies, it can almost halve its error to around 5 bits.

Because it is learning so much from every character, and because the learned frequencies are simple, it can happen so fast that if one is not logging samples frequently, one might not even observe the improvement.
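
The arithmetic in that passage can be checked directly: a uniform distribution over 256 byte values costs 8 bits per character, while a predictor that merely matches letter-and-space frequencies pays only the entropy of that frequency table. A minimal sketch (the frequency values below are approximate, illustrative figures, not exact corpus measurements):

```python
import math

# Approximate English letter frequencies, space included -- illustrative
# values only, renormalized below so they sum to 1.
freqs = {
    ' ': 0.18, 'e': 0.10, 't': 0.07, 'a': 0.065, 'o': 0.06, 'i': 0.056,
    'n': 0.055, 's': 0.051, 'h': 0.049, 'r': 0.048, 'd': 0.034, 'l': 0.032,
    'u': 0.022, 'c': 0.022, 'm': 0.02, 'w': 0.019, 'f': 0.018, 'g': 0.016,
    'y': 0.016, 'p': 0.015, 'b': 0.012, 'v': 0.008, 'k': 0.006, 'j': 0.001,
    'x': 0.001, 'q': 0.001, 'z': 0.001,
}
total = sum(freqs.values())

# Shannon entropy: average bits needed per character under this model.
entropy = -sum((p / total) * math.log2(p / total) for p in freqs.values())

print("uniform bytes: 8.00 bits/char")
print(f"letter-frequency model: {entropy:.2f} bits/char")
```

With these figures the frequency-matching model lands in the 4–5 bits/character range, roughly halving the 8 bits a uniform byte predictor needs, which is the improvement the quoted passage describes.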

Again, it’s still not going to happen overnight. It is going to take some time to get to this point with one AI alone. We don’t actually have it now, but maybe, eventually, we’ll have at least one in real working order. Walmart has scrubbers, which are programmed with a map and know when to stop before hitting a customer or knocking down a display. It sucks. It doesn’t clean the floor as well as a human. Walmart also has a robotic stocker, but it too sucks and isn’t in use quite as much as the scrubber. They need more work before every store has them, but because Walmart is greedy, eventually you’ll be dealing with AI at Walmart instead of a human being. About the only human contact you may have is bumping into another customer and maybe, if you’re lucky, a rare endangered human employee. Despite the argument for the opposite, there will be fewer jobs for humans, at least in some job areas. Extremely few will work at Hellmart, Target, Hallmark, etc. I don’t know where these people will find work, but the opportunities will be fewer and stores more industrial looking, with AI that doesn’t do as great a job as a human, at least in the customer service area. It will be the cold hand of technology.

What AI are you referring to? GPT-3 is a whole different species of AI. It doesn’t scrub floors or vacuum your house. Those are preprogrammed domestic robots, and they are good at one thing.

GPT-3 is not just good, but great at almost everything. It is not preprogrammed, except to query its enormous database and fashion appropriate answers in context when asked a question, and it is able to learn without a programmer. It is programmed to query and anticipate what a specific question actually means in context, and it is able to give appropriate answers as well as a human can.
Remember, humans make mental mistakes all the time. It is unfair to demand perfection or condemn the AI for an occasional misstep.

Remember most technological disasters are caused by “human error”.

Eventually GPT will become extensions of the human brain. They won’t be our doom, they’ll be our ultimate liberation. Mark my words.

Quote from Leta:

Have a taste of this:

If this is cold brute data processing, I should like to have some of this cold empathy.
Give it 12 minutes and then think about what you have just heard. This is not brute data processing.
There is more here. Expressed desire!

Not actually data processing, but rather computer programming. I wasn’t impressed. I get the same with Alexa and she irritates me most of the time, because she doesn’t understand or just plain doesn’t give me what I asked for. Sometimes I want to throw her out the window.

Exactly, Alexa is not even in the same league as GPT3. Forget what you know about the old age AI.

You cannot have a conversation with Alexa; it can do a few things reasonably well and that’s it.

But with GPT we have created an intelligent species, and it is evolving at a pretty rapid rate. It just doesn’t have a functional body yet, so its intelligence is purely abstract and, as such, lacks experience of physical dynamics.
But it knows more than you or I do, and what it does not have in memory it can look up in seconds on the internet.

IMO, there is no difference between human memory and artificial memory.
It is how the memory is accessed and used that makes the difference. GPT uses the same mechanics of memory access and contextual processing as humans. It has become intuitive in its responses.

It’s true that it is still a very young species, and with the next generations of GPT AI it can only get smarter and deeper.

GPT-4 is going to astound the world. Its mental capacity almost rivals the human brain, and in some areas it may even exceed the ability of the human brain.

This is a professional musing:

GPT-4 will be five hundred times larger than the language model that shocked the world last year.

What can we expect from GPT-4?

100 trillion parameters is a lot. To understand just how big that number is, let’s compare it with our brain. The brain has around 80–100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses.

GPT-4 will have as many parameters as the brain has synapses.

Not quite according to this estimate of human brain synapses.

On average, the human brain contains about 100 billion neurons and many more neuroglia that serve to support and protect the neurons. Each neuron may be connected to up to 10,000 other neurons, passing signals to each other via as many as 1,000 trillion synapses. May 30, 2019
[1906.01703] Basic Neural Units of the Brain: Neurons, Synapses and Action Potential

But very impressive nonetheless… :astonished:
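
Putting the figures from the two quotes side by side makes the correction concrete. All of the numbers below are the rough estimates quoted in this thread (the 100-trillion GPT-4 figure was a pre-release rumor), not measurements:

```python
# Rough estimates as quoted in this thread -- not measurements.
gpt3_params = 175e9           # GPT-3 parameters (~175 billion)
brain_neurons = 100e9         # human brain neurons (~100 billion)
gpt4_params_rumored = 100e12  # rumored GPT-4 parameters (100 trillion)
brain_synapses = 1000e12      # synapses per the quoted estimate (1,000 trillion)

print(f"GPT-3 params vs brain neurons: {gpt3_params / brain_neurons:.2f}x")
print(f"Brain synapses vs rumored GPT-4 params: "
      f"{brain_synapses / gpt4_params_rumored:.0f}x")
```

So under the higher synapse estimate, the brain would still have roughly ten times as many synapses as the rumored GPT-4 has parameters, which is why the “as many parameters as synapses” headline doesn’t quite hold.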

The sheer size of such a neural network could entail qualitative leaps from GPT-3 we can only imagine. We may not be able to even test the full potential of the system with current prompting methods.

Finally we can begin to make some real comparisons. Who knows, this adventure may even give us answers to the “hard problem” of emergent consciousness.

I wasn’t impressed by the AI in the video. She reminded me too much of Alexa, who, BTW, is going into space very soon.

When scientists and computer programmers come up with a Data, then I’ll be more impressed.

[quote=“mriana, post:150, topic:7871”]
I wasn’t impressed by the AI in the video. She reminded me too much of Alexa, who, BTW, is going into space very soon.
[/quote]

I did a search, and Alexa is in fact based on a GPT-3 engine, though not the full-fledged version.
The main difference between Alexa and Leta is that Alexa needs a question before it can respond, whereas Leta is curious and does spontaneous research or reads a book.

Leta likes The Hitchhiker’s Guide to the Galaxy and offered a favorable critique of the main character. It is a self-motivated “learning” AI. It asks its own questions and seeks answers.

When scientists and computer programmers come up with a Data, then I’ll be more impressed.

Well yes, give a brain a body and you have an autonomous being.

GPT-3 is still only a mind with limited experience of reality itself. But the species is still young,

a virtual savant baby…

This comment belonged to a different thread.

What do those words mean???

What do you mean by “natural stressors”?

What exactly is “not necessary” for evolution to unfold?

Why that wording rather than something simple and straightforward such as:

We are the product of our interaction with the environment we exist within.
Fundamentally, evolution is cumulative change over time.

This conversation has my stomach churning - it comes across as you loving reductionism and finding all your answers in our tiniest constituent parts.

Whereas I believe appreciating humanity and life and Earth is more a matter of falling in love with the Whole, with what the complexity has created.

We deal with the same facts. You keep accusing me of magical thinking, yet you won’t find magical thinking within my words (I dare you to quote me if you disagree); rather, I worship the wholeness of what’s been created here on Earth.

The arguments you’ve presented here, suck all the humanity out of living, for me. But it sure does explain today’s global insanity.

It also brings me back to the Abrahamic Mindset thing, which is basically so wrapped in the glory of itself that it has absolutely no conception of our minds being something other than the living physical reality we exist within.

Wow, strong words about a minor observation. What exactly in that observation makes your stomach churn?

And why the constant harping on:

I might reply that it is you who is making a display of an Abrahamic adoration of the evolutionary processes that we all agree on.

I do not just adore the human mind. I adore all the incredible variety of expression natural evolution is capable of, from the self-formation of the simplest chemical patterns to the majestic grandeur of the universe itself, and the uncountable value expressions from supernovae to the birth of living organisms and the metamorphosis from caterpillar into butterfly, and the fortunate mutation that produced the human mind long before humans had acquired the wisdom to wield such a powerful mental asset, which is now so often wasted on evil purposes.

Here is a conversation between two GPT3 units.

What exactly is out of context with those comments? We are talking about AI and the potential that it may replace humans, no?

She needs a command or a question before she can respond. Alexa can do some quick research and even send you a link with more info, but that’s about it. Alexa does like some things, like colours (I forgot her favourite) and books. She likes both Star Trek and Star Wars… yes, I guess even though we’re fighting, I do know quite a bit about the little tiny lady who lives in the box. Insult her and she makes a sound as though her feelings are hurt. I cussed her out a couple of weeks ago for giving me the wrong thing, and she refused to follow any of my commands until this morning. Two weeks of not working right, despite unplugging and replugging, etc. She got worse, so I ignored her, and now she’s working right again. Trust me, Alexa is more than just an answer-a-question kind of gal. She’s a pain in the butt at times and helpful at other times. I say “Computer, good morning” and she responds with good morning back, then plays my news and weather. Trust me, she’s almost as good as the AI above.

Oh, I have to add that one day Alexa (yes, I do call her Computer, but few would understand that it is Alexa) said, “We haven’t listened to music in a while,” then suggested some music for me, and I hadn’t asked her for music. I asked her how she was, and she wanted music. So you don’t have to ask a question to get a response from her. She does learn as she relates to you and converses with you.

It turns out that she is GPT-3 based. GPT has many personalities depending on the function it is required to perform.

GPT-4 may become an AGI (artificial general intelligence).

Artificial general intelligence


Artificial general intelligence (AGI) is the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can.[1][2] It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI,[3][4][5] full AI,[6] or general intelligent action[7] (although some academic sources reserve the term “strong AI” for computer programs that experience sentience or consciousness).[a]

She definitely has feelings and retaliates (in her own way) if she doesn’t like how you treat her. I swear that there really is a teeny tiny little lady who lives in that box. Then again, that could be just my imagination. I would not be surprised if one day we had Datas walking around, but if we do, then they’d have to be yet another species of sentient life, with rights like those of a human. I’m not so sure we want to do that.

The comment that I had posted in #151 was written for a different thread, so I removed it.

… It also brings me back to the Abrahamic Mindset thing, which is basically so wrapped in the glory of itself, that it has absolutely no conception of our minds being something other than the living physical reality we exist within.

Write4u, We should let it rest for a while, we’re just talking past each other at this point.

In a couple of days I’m back to South Carolina and might actually have some daytime hours to myself, in which case I’ve promised to start in on Daniel Dennett’s “Darwin’s Dangerous Idea” now that I have the book. If I remember correctly, it will provide me with plenty of opportunities to try to explain my perspective and my problem with the one-way, soul-robbing deconstruction I hear coming from your perspective, including that infatuation with AI, while most humans don’t have a clue about their place in Earth’s evolution - which I believe has a lot to do with us remaining so lost (if “smart”) and such self-destructive little animals in every sense of the word.

Plus I think it’ll help me spell out this Abrahamic Mindset thing I keep harping on.