A baby doesn't need to figure out how to be a collection of cells

This is about AI and the question of how we program feelings; they talk about the philosophical zombie idea. Lane throws it back to Lex, and he says something pretty amazing to me, starting at 12:40:

“Think of all the complexities that led up to the human being. The entirety of the history of four billion years. In some deep sense integrated the human being into this environment. That dance of the organism and the environment, we can see how emotions arise from that, and our emotions are deeply connected, creating a human experience. From that you mix in consciousness and the fullness of it. But, from the perspective of an intelligent organism that’s already here, like a baby, it doesn’t need to learn how to be a collection of cells or how do all the things it needs to do to do its basic function of being a baby. It learns to interact with its environment, to learn from its environment to learn how to fit in to this social society. The basic response of the baby is to cry a lot of the time, maybe convince the humans to protect it, or to discipline it, to teach it. We’ve developed a bunch of different tricks, how to get our parents to take care of us, to educate us, about the world we’ve constructed, in such a way that it’s safe enough for us to survive, yet dangerous enough to learn the valuable lessons.”

“To make an apple pie, you need to build the whole universe.”

Then he says he will leave the zombie idea to the philosophers. He also leaves aside love: the definition of it, and what happens between two human beings when a magic grabs them like nothing else matters in the world, and somehow you feel you've been searching for this feeling, this person, your whole life.

He’s talking not about what the feeling is, but about the fact that it’s real. If he has that feeling for a toaster, or a dog, it’s still a real feeling. It could be anthropomorphism, but there’s some kind of emotional intelligence in it. But Lex says there needs to be a higher order of intelligence than something like solving a mathematical puzzle, because that’s not the same as human interaction. He notes how incredible it is that we walk, in the same way it’s amazing that we create a conversation and make it meaningful. Nick puts the fine point on it: “What you’re saying is AI cannot become meaningfully emotional until it experiences some kind of internal conflict that it is unable to reconcile. The various aspects of reality, or its reality, with a decision to make, and then it feels sad, necessarily, because it doesn’t know what to do.”

Nick then references “21 Lessons for the 21st Century” by Yuval Noah Harari. Yuval thinks biochemistry is an algorithm and can figure out how to write great music. Nick can’t refute it, and finds it disturbing. In Yuval’s final chapter, he talks about meditating and suggests it may be a way out of the algorithm. Our decisions could be based on feelings, not a logic that’s algorithmic.

Then comes AI.
Interesting talk, even as a rerun.

Nick Lane really surprised me with his dismissal of “emergence,” since it seems so obvious via evolution: the transition from single-celled functioning creatures to complex creatures that required roads and plumbing and wiring to be created in order to carry the load of increasing size and complexity. The appearance of the notochord, in some later creatures to become a spinal cord: if that isn’t emergence, what is it?

Unlike the philosophizers (whom I’ve come to see more as dreamers and salesmen than reality-based investigators), I have a deep respect for Nick Lane, and I will have to dig deeper into exactly what he means, since I’m sure he has some parameters there wasn’t time for in this talk.


Ironically, I was just listening to Dan Dennett touching on the same topic, sort of.
It’s interesting so far as it goes, seeing as AI is something I’m aware of and wary of, but don’t have any great interest in getting into those weeds.
Though it does feel to me like the final tipping point for the self-destruction of our cozy little world and the biosphere, as we know it, has become a race between AGW, AI, and humans deciding to simply blow it up. Why? Because we can.

The part that really got my attention was his critique of Descartes which seems spot on to me.

2:57 - A keen summary of Descartes’ big mistake.

The big mistake goes back to Descartes, who wondered if he could trust his clear and distinct ideas, and he decided he could if God would guarantee them. And so he tried to prove the existence of God so he could trust his clear and distinct ideas.

That’s a hopeless quest. The best we can do is gather the smartest people around we can find, let them compete to find the truth, and see where you find consilience. Find where you find agreement, and that’s the best you can do. Well, it’s good enough; it gets us to the Moon, etc.

A baby doesn’t need to figure out how to be a collection of cells

Or another way to put that: it’s our biological physical body that has enshrined the lessons of the past half billion, or four billion, years (depending on one’s parameters) within us, where they remain relevant to our every living moment.

Our living body produces our consciousness and sense of self - this time around the wheel, before handing it all back to the body (sperm and egg), to form the seed of the next incarnation.

Besides being a stepparent & Napa, I also have a biological child and grandchild, so I have a personal connection to this grand scheme, as I observe echoes of myself within them.