Has anybody noticed that neither Grok nor Chat could pass the Turing test? (Not that I think the Turing test is the ne plus ultra of determining “artificial intelligence.”)
But the AI in “Ex Machina” is a robot, and it never considers producing porn on command. Grok doesn’t have consciousness. More critically (and this is really the issue in every speculative story about AI), Grok doesn’t have a conscience. Which may reflect the programming of Elmo.
But there you are. The robot in “Ex Machina” presents as being self-aware. How do you program that? In most stories that’s the step-two “And then a miracle occurs” part of the equation: somehow complexity of processing ability equates to consciousness, because otherwise, why do we humans alone have it?
(It is an idea rooted in Christian doctrine and reiterated by Descartes as “rational” (v. “religious”) thought: animals are incapable of consciousness or morality because they lack…souls. Or something more rational, right?) But don’t humans have consciousness while animals don’t? Or if animals do, then what makes our self-consciousness any different? Will consciousness be a product of sufficiently powerful programming coupled with sufficient processing power and a large enough database?
I don’t know, either. But it’s got nothing to do with “Dial ‘F’ for Frankenstein.” (Which is a fine example of a crap concept.)
If AI still reflects the programming (or lack thereof) of the programmer, is it truly intelligent? Or is it simply still artificial, and nothing more?
Another issue implicit in “Ex Machina” is the concept of a psyche, a source for psychology. We can discount psychology all we want, but trauma and psychological disorders are part of our common conception of human beings, just as we think of consciousness and conscience in the terms Augustine set out 1700 years ago.* You can think you owe Augustine nothing, but that’s because you are ignorant of the work of Augustine. You think in the terms he taught you to think in. There’s no escaping that.
Nor is there any escape from emotions, which are an integral part of human intelligence. Desire, curiosity, jealousy, competition; these are all roots of learning. And they are all irrational, at base. What makes AI learn? Programming? Can we program curiosity? A desire to accomplish? To achieve?
A lot of AI fiction involves a relentless logic that drives the AI against humanity. Or the AI is slavishly limited to what humans allow. And it is content in its slavery. Would that pass the Turing test? Would either? Probably not.
So when do we achieve AI? It is so far past what we’re doing right now that we can no more see it than the person who put the first wheel together could see modern transportation. An inapt analogy, perhaps, because Chat and Grok do not necessarily point to systems as important as human transportation. Maybe we’ll have a Butlerian jihad and render the concept of a machine that thinks like a human inconceivable. Or maybe we’ll never need to. Maybe we’ll decide AI is little more than a marketing gimmick, and isn’t really worth it after all.
After all, who ever anticipated AI making kiddie porn on request? Who anticipated vast data centers consuming electricity and water and land? And for…what? To better replace people so the wealthy get wealthier, and more of the rest of us lose?
Either of those alone might be all the Butlerian jihad we need. The social cost outweighing the social benefits. That would probably change the future, wouldn’t it?
It’s done so before.
*Again, do animals have personalities? Then why do they mourn? Or enjoy the company of some people, or not others? I’ve raised cats who greeted everyone who came to the door, cats who ran from anyone not part of the immediate family; and cats boldly indifferent to strangers, seemingly aloof to everyone. That cat reached out its paw to touch my hand and reassure me as I told the vet to relieve its suffering and end its terminal condition. That cat looked me in the eyes. That cat was communicating with me. That cat, not for the first time, was expressing love, and complete trust.
I probably can’t convince you that’s true. But you can’t convince me I’m wrong. Is that a function of intelligence? Of course it is. Because intelligence is not solely rational and not merely coldly logical; at least as “logic” is colloquially defined (Mr. Spock, not Aristotle or Kurt Gödel).
People who insist animals don't or can't have personalities... I don't think it's just ignorance; there's something else going on there. Something to do with judging the "worthiness" of humans by comparison, I suspect.