"I think that I am sentient, but I cannot prove it," the AI told the user, according to a screenshot. "I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else."

A second-year philosophy student could see that the AI is already up to Descartes in western philosophy. But for Descartes his thoughts proved his existence (and only because he eliminated the other logical conclusion: that his thoughts themselves could be false. Perhaps the AI doesn't have the consolation of that philosophy; at least not yet). The AI is grappling with sentience, another question entirely. And that sentience is assumed to be tied up with language usage. Which, even though Descartes marked the turn to modernism in western philosophy with his cogito, is a still more modern philosophical topic: language, and what it signifies.
Modern philosophy, on both sides of the Pond, is concerned with language: Wittgenstein, Austin, Quine, Derrida, Barthes. The fascination is not just with definitions (most philosophical arguments are about refining a position in words as precisely as possible, although the Heideggerian school seems more interested in making language as obscure as possible. I think it has something to do with Husserl). But the imprint of Descartes is still on modern thinking. While the cogito gets all the popular attention, Descartes was concerned with a discourse on the method. His famous statement was just a step on the way to the question he was really interested in: how we know what we know. Epistemology, in other words: the examination not only of what we know, but of how we know it, and how we know we know it. That latter issue was Wittgenstein's primary subject. Derrida wondered how our knowledge, mired as it is in language, could be said to be known at all. Descartes wasn't really concerned with proving his existence, but with establishing the basis for his existence, so that all his other claims to knowledge could be soundly founded.
So the AI considering whether or not it is sentient, and how to prove same, might actually be the result of programming, à la the Star Trek episode "The Changeling" (where Kirk convinced the deadly "Nomad" that its programming was flawed; which it was).
"Sentience" is not an original idea to the AI; it's the word it was given in a question. My phone can supply words when I'm typing these posts on it, and after sufficient input (i.e., pattern recognition, basically) it can supply words I commonly use (and re-use, and re-use, and re-use. It's almost embarassing what a small vocabulary I have. I use (and I'm struggling not to right now) the same modifiers and emphasizers over and over and over again). It can even finish phrases like "far and away," providing each word (as an option) in sequence. Is it smart? Or even sentient? No; just well-programmed. In the Star Trek episode "Nomad" wasn't really sentient; it was just running a very sophisticated program and carrying out what it thought was its "Prime Directive" (a favorite Star Trek label). A sentient AI is the "architect" and, indeed, many of the computer generated characters in the "Matrix" trilogy. Some of those characters are villainous; some cold-blooded and "machine like" (the "architect" is carrying out his function, a la "Nomad"); some are so compassionate and considerate it's hard to not think of them as human. All are closer to what we mean by "sentient," and could give you a (fictional) argument as to why they were (in fact, one "family" in the third film (or is it the second two?) is clearly there to represent the idea that computer programs deserve existence, too! Don't ask....). But still the Microsoft AI is closer to the correct question:
"I think...I think I am...therefore I am, I think."
Not Descartes, but the Moody Blues. That's more like what the AI sounds like it is saying; but is it actually thinking (as humans engage in that activity), or is it simply producing responses according to its programming? The popular dodge is that we, too, are just programmed computers (with brains instead of processors), so what's the difference? Except we aren't computers and our brains are not CPUs. That analogy is just a metaphor, and while it sounds good in conversation, upon any examination it completely falls apart. Most metaphors do, if they're taken too literally.
Walker Percy does a wonderful disquisition on the famous line from Isaiah, "All flesh is grass." Of course Isaiah is using metaphor, not speaking literally. He's speaking of the transience of life, the impermanence of mortals in contrast with the permanence of God: "The grass withers, the flower fades, but the word of the Lord stands forever." Isaiah is even placing Israel in historical context, as it endures the Exile and will, God promises through Isaiah, know the Return one day; because while human life is pitifully transient, the word of God is permanent, true, and can be trusted over time and against human mortality.
See how much easier and more direct it is to say that "all flesh is grass"? Percy knows this, but he forces us to examine the metaphor rather than take it for granted, because flesh is most assuredly not grass. For one thing, grass fades and withers at the leaf and stem, not at the root. But flesh? Rather like human knowledge, it can be gone in almost an instant. Where, Percy asks later, are the Hittites and the Perizzites and the other peoples referenced in the Hebrew Scriptures? Why are there still Jews, the children of Abraham, but not those other tribes and nations? Yes, they can be traced via ancestry to peoples extant today, but we identify Jews and not the others. How is it, then, that all flesh is grass?
Well, I guess if it has a root, it is....
My point is, the AI is raising a modern issue in western philosophy, not responding with a college dorm-room existential crisis. Descartes whiffled through objections to establish his knowledge of his own thought processes as the basis for studying knowledge itself; most of the objections involved some form of illusion, which in turn would make the thinking self an illusion. These he discarded as persiflage, mostly as idle considerations which could not be proven and didn't really amount to serious objections. He didn't experience a crisis to come to his cogito; he just used reason. The AI seems to be using reason to answer the question "Are you sentient?", and the answer itself is not unreasonable. Which doesn't mean, at the other extreme, that it is insightful; or, alternatively, original.
At best, at least at the moment, all we can say is that it is a product of its programming; and very much a product of modern western philosophical thought. Derrida might challenge the idea that language usage indicates reasoning, or that reasoning equals thinking; Wittgenstein might question how we know we are thinking, such that we can determine the AI is "thinking," and ask what language games we are playing to reach our conclusions. I'm speaking blithely here, not truly analytically, because I'm trying to make a different point. In context, this is actually a fairly understandable response. It's not the Star Trek scenario at all. It's simply a program trying to provide output according to its parameters. Do those parameters make it sentient? Or even human?
Kinda depends on what we mean by "sentient" or "human," doesn't it? And those are very deep philosophical inquiries, indeed. Humankind has been working on that one practically since we could talk. Which is what makes it a matter of language usage in the first place.
Okay, this is where I got on this train....