AI

Enjoy the Process

Excerpt from “Enlightenment 2.0,” a Buddhist Geeks conversation with Ben Goertzel and host Vince Horn:

I think the idea underlying that story (Enlightenment 2.0) really came out of something I worry about in my personal life, just thinking about my own future: what would I want if superhuman Artificial Intelligence (AI) became possible?

Wall-E

…I really think that the human brain architecture is limiting, so if you could change your brain into a different kind of information-processing system, you could achieve better states of mind. You could feel enlightened all the time, while doing great science, while having sensory gratification, and it could be way beyond what humans can experience.

So that leads to the question: okay, if I had the ability to change myself into some profoundly better kind of mind, would I do it all at once? Would I just flick a switch and say, “Okay, change from Ben into a super mind”? Well, I wouldn't really want to do that, because it would be too much like killing Ben and replacing him with the super mind. So I get the idea that maybe I'd like to improve myself by, say, twenty percent per year. That way I could enjoy the process, and feel myself becoming more and more intelligent, more and more enlightened, broader and broader, and better and better.
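For scale, here is a minimal sketch of how that schedule compounds. The twenty percent rate is Goertzel's; the year horizons are illustrative assumptions, not his.

```python
# How fast "twenty percent per year" compounds. The 20% rate comes from
# the excerpt; the horizons below are illustrative assumptions.
rate = 1.20  # 20% self-improvement per year

for years in (10, 25, 50):
    factor = rate ** years
    print(f"after {years:2d} years: {factor:8,.0f}x the starting mind")

# after 10 years:        6x
# after 25 years:       95x
# after 50 years:    9,100x
```

Even the gradual schedule reaches super-mind territory within an ordinary lifetime; the switch still gets flicked, just slowly enough to enjoy.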

Cylon (Battlestar Galactica)

…You think of phase transitions in physics. You have water, and you boil the water, and then it changes from a liquid into a gas, just like that. It's not like it's half liquid and half gas, right? I mean, it's like the liquid is dead, and then there's a gas.

That was the kind of theme underlying this story. There was this super-intelligent AI that people had created. After it solved the petty little problems of world peace, and hunger, and energy for everyone, and so forth, that superhuman AI set itself thinking: “Okay, how can we get rid of suffering, fundamentally? How can we make a mind that really has a positive experience all the time, and will spread good through the world rather than spreading suffering through it?”

The conclusion it comes to is that such a mind is possible, but that human beings can never grow into it, and that the AI itself, given the way it was constructed by humans, can never grow into it either.

So the AI concludes that there probably are well-structured, benevolent super minds in the universe, and that in order to keep the universe peaceful and happy for them, we should all just get rid of ourselves, because we're fundamentally screwed up and can't ever continuously evolve into something that's benevolently structured.

Which I don't really believe, but I think it's an interesting idea, and I wouldn't say it's impossible.

Sam Worthington as Marcus Wright (Terminator Salvation)

[So] is the AI a lunatic, or does it have some profound insight that we can't appreciate? That's a problem we're going to have broadly when we create minds better than our own.

It's just like when my dog wants to go do something and I stop him, right? Maybe it's just because my motivational system is different from his. I don't care about the same things he does. I'm not that interested in romping in the field like he is, and I'm just bossing him around based on my boring motivational structure. On the other hand, sometimes I really have an insight he doesn't have, and I'm really right: he shouldn't go play in the highway, no matter how much fun it looks. The dog can't know, and similarly, when we have a superhuman AI, we really won't be able to know. We'll have to make a gut-feel decision about whether to trust it.

Extending Our Reach

Excerpts from Ray Kurzweil’s “Introduction to 9”:

Our emotional intelligence is not just a sideshow to human intelligence; it's the cutting edge. The ability to be funny, to get the joke, to express a loving sentiment: these are the most complex things we do. But they are not mystical attributes. They are forms of intelligence that also take place in our brains. And the complexity of the design of our brains – including our emotional and moral intelligence – is a level of technology that we can master. There are only about 25 million bytes of compressed design information underlying the human brain (that's the amount of data in the human genome for the brain's design). That's what accounts for our ability to create music, art, and science, and to have relationships.
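A back-of-the-envelope check of that 25-million-byte figure. This is a sketch, not Kurzweil's own derivation: the genome length (roughly 3.2 billion base pairs at 2 bits per base) is standard, but the ~30x lossless compression ratio is an assumption chosen here to show the order of magnitude.

```python
# Rough check of the "about 25 million bytes" claim. Genome length and
# 2 bits/base are standard figures; the 30x compression ratio is an
# assumption (the genome is highly repetitive, so heavy lossless
# compression is plausible), used only to illustrate the scale.
BASE_PAIRS = 3.2e9        # approximate length of the human genome
BITS_PER_BASE = 2         # four bases (A, C, G, T) -> 2 bits each
COMPRESSION_RATIO = 30    # assumed lossless compressibility

raw_bytes = BASE_PAIRS * BITS_PER_BASE / 8
compressed_bytes = raw_bytes / COMPRESSION_RATIO

print(f"raw genome:        {raw_bytes / 1e6:.0f} million bytes")        # ~800
print(f"compressed design: {compressed_bytes / 1e6:.0f} million bytes") # ~27
```

Tens of millions of bytes is tiny by software standards, which is the force of Kurzweil's point.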

Mastering these capabilities is the future of AI. We will want our future AIs to master emotional intelligence, and the movie 9 shows us why. We want our future machines to be like the stitchpunk creations, not like the rampaging machines.

My view of the future is that we will work hand in hand with friendly machines, just as we do today. Indeed, we will merge with them, and that process has already started, with machines like neural implants for Parkinson's patients and cochlear implants for the deaf. But my vision of the future is not utopian. While I don't foresee the end of conflict, future conflict will not simply be man versus machine. It will be among different groups of humans amplified in their abilities by their machines, just as we see today.

The stitchpunk creations succeed not despite their emotionalism and their bickering with each other, but because of them. We will want our future machines to be emotionally, socially, and morally intelligent because we will become the machines. That is, we will become the rag dolls. We will extend our reach physically, mentally, and emotionally through our technology. This is the only way we can avoid the apocalyptic world that 9 wakes up to.