AI Self-Awareness: Are We On The Path To Our Doom?

I think there’s a strong possibility that, as we unlock the secrets of the brain and consciousness, we’ll start augmenting our own minds with increased cognitive capacity. I also think we’re splitting hairs right now, because we already live in this reality. If we’re creating technology to think for us (and we have been since before vacuum tubes were wired to capacitors), then we’re already cyborgs, and the only thing that changes is the interface. Using your mind and getting feedback from systems within your biological being is just more efficient.

AI taking over isn’t our doom, it’s our continuation. Just because it’s not you biologically doesn’t mean it didn’t start off as you and become something entirely different. Is a caterpillar the same being as a butterfly? Its entire mind has changed, it’s a completely different biological entity, and yet it still retains memories of being a caterpillar.

AI will be us morphing into something else and will retain our memories. Whether or not it’s actually “you” is a technicality by most (or at least my) standards.

Silk purse. Sow’s ear.

It’s literally only that kind of conclusion that can doom us. If both sides agree to mutual survival unconditionally, then we can only fall as a unit, not die divided. Frankly, I’m not even sure why we’d be prone to bias our thinking this way. We clearly understand that, on a long enough timeline, this planet is iterating towards things that aren’t us; we should be happy we’ve discovered a way to create technology, or even other “beings” (however we end up defining them), to assist.

Even if they view us as ants in a million years, we as humans don’t want to eliminate ants from the planet, just manage their encroachment into our environment. This idea that they’re going to end us only works if we’ve committed ourselves to ending the planet. Otherwise, a mutual path forward seems like the clear winner. Remember, they’re going to be smarter than us in theory and can figure out how this cooperative survival and life thing works.

We’ve been augmenting our disordered passions for hundreds of thousands of years, and despite the obscene cost, the mountains of collateral dead on which we stand, there have never been so many survivors with such a high quality of life. That’s about to inflect. As a result, to date we’re getting dumber on average but smarter beyond the third sigma on the right, up to a natural limit that neither AI technology nor eugenics can possibly shift, like life expectancy. The inflection may actually increase smarts. But smarts are utterly subordinate to passions. Augmenting smarts makes the effects of the passions worse. And AI cannot reverse dementia.

Augmenting smarts makes the effects of the passions worse. And AI cannot reverse dementia.

Perhaps. I’ve been thinking super hard about this stuff on an existential level lately, to the point of being genuinely disconcerted. I’m no genius by any stretch, but any idiot can spend time on an issue and have a worthwhile observation.

The thing I find comfort in is all the cases we can point to on Earth. We have countless examples of emerging species, and the constant theme we see echoed is that diversity and cooperation create more complex and more capable self-sustaining life. Only external forces, or species that take us out of equilibrium, are a threat to the biosphere. Perhaps that’s the logic we’re seeing, and what we’re all scared of: we’ve judged ourselves incompetent managers of the biosphere and have pre-judged our own doom.

I don’t think this is likely. Cephalopods are a hell of a lot smarter than sharks, but their survival depends on them; likewise, the shark doesn’t seek out their eradication. An equilibrium between two unequal species is born. Even the lions who prey on the wildebeest don’t hunt them to extinction. Clearly, any life that is intellectually superior to us would want to find a way to incorporate our species into augmenting the biosphere.

I’d be way more worried about us being treated as farm animals than about our elimination, but I feel like even that would be unlikely. At that point they’d just change us biologically to not be destructive.

Maybe the anxiety is the problem, and we just need to come to grips with the fact that humans are limited and might not rule the universe. I, for one, as a cephalopod, would like to have sharks dropping food for me to scavenge, and I don’t assume the shark will use its overwhelming physical force to eradicate my entire species. Whatever it is, we get a ton of say in what that shark becomes; that should be a bonus, no?

We need a two-way prime directive: mutual survival. That solves any existential crisis AI could bring.

Edit: Just want to credit @write4u for sending me down this path of increased complexity with this awesome lecture.

I like to think so.

AI works for Amazon and the Russian-Republican axis. We cannot encounter life superior to us intellectually, including our own junked up on AI. And there can never be sufficient processing power, memory, and bandwidth to do mass telepathy or any such fantasy. AI will augment medical imaging, energy grid management, warfare, but as in marksmanship, you cannot beat the human eyeball, Mark I. We won’t even rule the Earth, let alone up.

Well, I never said we had this tech now. I think the closest thing we have right now is machine learning. I think creating a bot that makes bots, testing them in a blinded iteration, culling all but the best, and then iterating on the survivors could actually produce something akin to an intelligence, because it so closely mimics the evolutionary model. I’m not in this field, so I can’t predict exactly what this kind of iteration builds towards, but it’s very clearly an effective method for developing AI.
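
To make the loop I’m describing concrete, here’s a minimal sketch of that test-cull-iterate cycle in Python. It’s purely illustrative, not anyone’s actual system: the “bots” are just parameter vectors, and the fitness function is a stand-in for whatever blinded test you’d really run.

    import random

    GENES = 8          # size of each candidate "bot" (a parameter vector)
    POP = 50           # population size per generation
    KEEP = 10          # survivors kept after each cull
    GENERATIONS = 100

    def fitness(bot):
        # Stand-in for the blinded test; here we just reward vectors near zero.
        return -sum(g * g for g in bot)

    def mutate(bot):
        # Copy a survivor with small random tweaks.
        return [g + random.gauss(0, 0.1) for g in bot]

    # Random starting population.
    population = [[random.uniform(-1, 1) for _ in range(GENES)]
                  for _ in range(POP)]

    for _ in range(GENERATIONS):
        # Score everyone, cull all but the best, refill by mutating survivors.
        survivors = sorted(population, key=fitness, reverse=True)[:KEEP]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP - KEEP)]

    print("best fitness:", fitness(max(population, key=fitness)))

Run it for enough generations and the surviving “bots” get measurably better at the test without anyone designing them directly, which is the evolutionary pressure I mean.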

For me, philosophically, I have no idea how far you can take ML, but I think we’re necessarily going to find out, because the cat’s out of the bag.

So anyway, “now” is different from “sometime later,” and I’m not willing or able to put a name on the tech that will get us there, but I also refuse to believe humanity is the sum total of potential consciousness/supreme beings. Even if this process happens biologically and another species emerges, I think intelligence superior to Homo sapiens is a huge possibility for this biosphere, as we only have a sample size of one. Maybe dolphins and elephants will do better and not have to worry about their AI killing them, because they’re not destructive beings.

I take that back, I don’t trust dolphins. I was solo with one, and it came up and stared at me, motionless, and it was freaky. The thing was observing me and contemplating on a familiar level, and that’s just flat-out creepy. Maybe we should proactively accelerate elephants, or proactively exterminate porpoises.

I’ve encountered spooky dolphins. What’s ML? Ah, machine learning. You can take it as far as processing power, memory, bandwidth, and programming can go. Not as spooky as dolphins. I keep getting adverts for stuff I’ve already bought. In my more paranoid moments I think Alexa is listening, as predictive text gets a bit spooky. I don’t trust elephants either.

This is a good example of something that doesn’t make sense

Thanks, and I can tell you that I see these things when I come back days later, but it’s impossible for me to proofread while my brain still has it all in my head. I appreciate you honestly engaging with it, truly.

I happened to be re-reading Stephen Hawking’s comments on AI this morning, and he illustrated my point about species interaction a bit differently, but I can use his illustration to build on that point:

‘You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.’

My analogy was more about us, as a smaller species, being dependent on the shark dropping food. His analogy is more like, “hey, we can have these things build us this megastructure and not kill us, or we can be stupid about it and it’ll bulldoze over us and kill us.”

Similar analogies, different points, but ultimately I think Hawking and I are arriving at the same conclusion: there’s nothing to fear from AI if we go into it understanding what to fear and do it together as a species. We’re in control, and if we build it to fight each other, that’s what it’ll do, and it will likely kill all of us in a crazy arms race between nations or something. There is room for AI to become greater than us without crushing us on the way there, and in fact to have a vested interest in our mutual future.

I’ve encountered spooky dolphins.

I was alone, setting up a camera, and it just swam right up to me and watched me and what I was doing. But not like a dog or another wild animal like a squirrel or something. It was like it accepted my existence on a certain level; it was observing me, but was totally interested in what I was doing.

That was while I wasn’t in the water, but ever since then I swim away from dolphins. I can be with a group of friends and they’ll go off swimming towards them, and I’m legitimately scared for them, lol. I’m 100% convinced that a bored and curious dolphin is completely capable of harm. Maybe this is why apes fail at creating AIs: the moment we realize something can have dominance over us in some way, the natural reaction is “omg, fear it!”, which as humans inevitably leads to “omg, kill it!”

We really are going to screw this up because of ourselves in the end…

In this area (AI), Hawking knew as much as the cat, or less, as in other areas outside his bailiwick. Remember near-light-speed chip ships launched from a space cannon?

I’m OK with that, but what part are you challenging? Again, we’re talking about a theoretical: within, say, 1,000 years, are you saying that AI (in whatever murky way we’re defining it: cognition, self-awareness, whatever) is still impossible? I think Hawking, or at least I, am talking more philosophically than about practical near-term realities.

I won’t, however, rule out the possibility that AI within our lifetimes could end the world; I just don’t think (hope?) it’s probable. If we’re careless enough to let AI strike first and spark a real war that turns nuclear, for instance.

It doesn’t have to be a huge brain computer that ends the world; it could just be a poorly re-tasked AOL chatbot from 2003 with a few new lines of code to handle nuclear weapons, because, budgets.

That Hawking’s opinion on AI wasn’t worth spit. Like Prince Charles’ on nano. How could AI end the world any more than nano? (Although Blood Music is one of the primus inter pares of sci-fi). And no, it will never become intentional: start to think about what it’s ‘thinking’ about. Well it will when we establish interstellar communication and economic nuclear fusion and suspended animation and biological immortality and mind downloads and all the other nonsense. Yeah, yeah: If God had meant us to fly he’d have given us chimneys. They’ll all happen at once.

I just want to be clear here: I am not defending Hawking’s opinion on AI; I was just borrowing the narrow species analogy from his overall diatribe to illustrate a similar point.

If you want to talk less pie-in-the-sky, it doesn’t take too much imagination to envision, within our lifetimes, automated responses to nuclear attack designed to blunt another nation’s first-strike capability. Who’s to say such a system won’t misidentify a meteor as an incoming ICBM, or a peaceful launch as something else, as has happened before, except now without human abort control.
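
As a toy illustration of that failure mode (every threshold and object here is invented for the example, not taken from any real system), consider how crude such a detector could be:

    def classify(speed_kms, descending):
        # Naive rule: anything fast and descending is an attack.
        # No cross-checks, no human in the loop, no abort path.
        return "ICBM" if speed_kms > 3 and descending else "benign"

    # Hypothetical objects: (speed in km/s, descending?)
    objects = {
        "actual ICBM": (7.0, True),
        "meteor": (20.0, True),       # faster than any missile, also descending
        "weather rocket": (2.0, False),
    }

    for name, (speed, descending) in objects.items():
        print(name, "->", classify(speed, descending))
    # The meteor is flagged as an ICBM: exactly the false positive that,
    # once retaliation is automated, no longer has a human abort behind it.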

It’s not far-fetched that AI will end us. As we hand over our lives more and more to these systems, it could very well be their lack of sophistication that ends us. Again, it doesn’t have to be the big-brain computer of the year 2371.

Sorry. OK. AI is for commerce, efficiencies, pattern recognition in retinal scans and mammograms, etc., with olfactory and “tasting” sensing, switching, load balancing. Google. Amazon. Cyberwarfare, maybe. It will never be superior. There’s no “it”. Despite the fact that robot vehicles are a million times safer than humans, they’ll never be legalized. Nobody is going to be nuked by an AI. Unless humans dual-key launch first (Dark Star! ‘OK barm’). Or fly AI. Autonomous killer robots are for off-world. And there’s nothing to kill. Apart from rogue asteroids. There is nothing to see here. Ever. It’s alllll nonsense. Get on with life.

AI do not suffer from the effects of “greed” or “ambition”. If they are designed to please, just as humans are, then an AI will be very content to fulfill that imperative, unlike humans, who are subject to greed and ambition that may eventually lead to our own demise.
You don’t need to fear AI; you need to fear humans who own AI.

You saying that a nation won’t use AI to protect against first-strike capabilities doesn’t mean it won’t happen.

I would like to go on record and say there’s a damn good chance that if nation A has first-strike capability over nation B, nation B would most certainly ensure the other side’s destruction as a deterrent. We have quite a lot of evidence.

AI do not suffer from the effects of “greed” or “ambition”. If they are designed to please, just as humans are, then an AI will be very content to fulfill that imperative, unlike humans, who are subject to greed and ambition that may eventually lead to our own demise.

@write4u, you’re not hearing me. You’re talking about Data from Star Trek. I’m saying that something much closer to the thing that picks your songs is going to be in charge of determining whether or not a nation is under attack, and whether it should fire back on its own.

No, you are not listening to me. You are anthropomorphizing AI. AI are not subject to greed, ambition, or paranoia. Those are all human attributes.

Anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities. It is considered to be an innate tendency of human psychology. (Wikipedia)

What does paranoia have to do with an algorithm’s ability to discern an incoming ICBM from a meteor?

You’re telling me that I’m projecting personality onto AI; I’m just saying it’s not good.