Tuesday, February 14, 2023

"These Are Not the Droids You Are Looking For"

Before I forget: the comparison is made in this article to Tesla's attempts to build a self-driving car.  But in another article on that subject, the only example of a truly self-driving car available was a car on a closed course where the area had been mapped extensively into, effectively, a database for the computer.  I also had an article (which I don't think I preserved; I'm getting sloppy in my older age) about an MIT study which found that the computing power necessary to create a truly self-driving vehicle would require so much energy that the car itself (factoring in manufacturing and the non-renewable fuels used to generate the power that recharges the batteries) would not have a net positive impact on greenhouse gases.  It would also require larger (and heavier, presumably) batteries to allow the car to function at all.

Which, you know, makes sense, because from a biological point of view the human brain consumes an enormous amount of energy from the fuel (food) we provide to it. Almost too much, from a purely biological standpoint.

Anyway, the problem of the self-driving car is that you have to give it so much information that you can't trust it to function unless it already has all that information.  Whereas any reasonably competent human driver can navigate an unfamiliar setting (especially, these days, with the help of GPS) on the first pass.  I can, in other words, drive through the maze of highways, interchanges, overpasses, on- and off-ramps of Dallas/Fort Worth (not my home turf) relatively well with GPS, even though I'm just passing through on my way to or from home base.  (Before GPS I did it with paper maps and the Lovely Wife navigating.)  A computer can't do that, because it can't take in enough data without a tremendous amount of preparation (not only of the route but of the surroundings, all the ever-changing variables an attentive driver is coping with).  Which is why, I presume:

While great progress has already been made, the last push toward a near-100 percent reliable vehicle is proving far harder than the bulk of work that was already put into it.

The devil is in the details.  And navigating a motor vehicle through the real world without striking objects both mobile and stationary, as well as managing to get from point A to point B without being either a traffic hazard (too slow and cautious) or a hazard to traffic (far too fast and reckless), is a great deal more complicated than just the type of sensors the car has.  The variables most drivers learn to cope with (enough of us do that roads and highways aren't just impassable with crash scenes) are more than AI, so far, can handle.  And when it can, will it require so much energy (both as computing power and as electricity to power that computing power) that it will be pointless to develop the software and the hardware to operate it?  Human beings cracked that problem, but, as a practical matter, why do we need to reinvent the wheel?

And it's not really just a problem of self-driving cars.  It's the question of the value of AI in general:

"If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics," Lemoine told the newspaper.

Reminds me of the old joke: "Be alert!  The world needs more lerts!"  Do we really, IOW, need more 8-year-olds who know physics?  Is that somehow a benefit to us?  I mean, sure, it's not Skynet deciding to destroy all humans, but is it really an advance in human capabilities, a technology we will find a valuable use for?

Here is how the Microsoft AI program responded when it was asked if it was sentient:

"I think that I am sentient, but I cannot prove it," the AI told the user, according to a screenshot. "I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else."

You can say the AI is using language, so it does have a "subjective experience of being conscious."   But my daughter's dog responds to his name, and has a "subjective experience of being conscious."  In fact he came to our house the other day and expected a water bowl on the floor for him (which we usually put out when we know he's coming; we didn't know it that day).  So he went over to the water dispenser and put his front paws on it, then looked at us until we got the message.  He's not exactly using language, but he is communicating.  Is the Microsoft AI's "subjective experience of being conscious" at even that level?  The main argument of Hubert Dreyfus' critique of AI is that intelligence is a function of a living being, and cannot be understood, or even realized, outside the consciousness of a body.  My daughter's dog knows both the Lovely Wife and me, and now even recognizes the exit off the freeway that leads from his house to ours.  Which means he learned it as fast as, or faster than, AI, since he learned it from observation, not from programming.

There is another problem with AI beyond the problem of computational power exceeding the capacity of the system to provide energy commensurate to the task (humans solved that biological problem some time back, but whether we can solve it for computers remains to be seen), and that problem is a mathematical one.  Or maybe it's more appropriate to say a meta-mathematical one.  It's the problem of Godel's incompleteness theorem:

Godel's conclusions bear on the question whether a calculating machine can be constructed that would match the human brain in mathematical intelligence.  Today's calculating machines have a fixed set of directives built into them; these directives correspond to the fixed rules of inference of formalized axiomatic procedure.  The machines thus supply answers to problems by operating in a step-by-step manner, each step being controlled by the built-in directives.  But, as Godel showed in his incompleteness theorem, there are innumerable problems in elementary number theory that fall outside the scope of a fixed axiomatic method, and that such engines are incapable of answering, however intricate and ingenious their built-in mechanisms may be and however rapid their operations.  Given a definite problem, a machine of this type might be built for solving it; but no one such machine can be built for solving every problem.  The human brain may, to be sure, have built-in limitations of its own, and there may be mathematical problems it is incapable of solving.  But, even so, the brain appears to embody a structure of rules of operation which is far more powerful than the structure of currently conceived artificial machines.  There is no immediate prospect of replacing the human mind by robots.

Ernest Nagel and James R. Newman, Godel's Proof.  New York:  New York University Press, 9th printing, 1974.  pp. 100-101.
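
Since "fall outside the scope of a fixed axiomatic method" is doing a lot of work in that passage, here is the theorem itself in its standard modern form (the Gödel–Rosser statement; this is my paraphrase of the textbook version, not Nagel and Newman's wording), set out as a small LaTeX snippet:

\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem*{theorem}{Theorem}
\begin{document}

% The first incompleteness theorem in its standard textbook form
% (Gödel 1931, as strengthened by Rosser 1936).  A paraphrase, not a
% quotation from Nagel and Newman.
\begin{theorem}[G\"odel--Rosser]
Let $F$ be any consistent, effectively axiomatized formal system that can
express elementary arithmetic.  Then there is a sentence $G_F$ in the
language of $F$ such that
\[
  F \nvdash G_F
  \qquad\text{and}\qquad
  F \nvdash \lnot G_F .
\]
\end{theorem}

% Adding $G_F$ (or its negation) as a new axiom just produces another
% system of the same kind, with its own undecidable sentence; that is
% the formal content of "no one such machine can be built for solving
% every problem."

\end{document}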

Nagel and Newman in 1974 limited themselves to "mathematical intelligence."  Today's AI supposedly doesn't have the "fixed set of directives" which "correspond to the fixed rules of inference of formalized axiomatic procedure."  But it appears it does have one of them: the latter, not the former.  The Microsoft AI went "Star Trek" ("'I felt like I was Captain Kirk tricking a computer into self-destructing,' they added.") perhaps because it has the latter, if not quite the former (I would argue it has the former, too; but OCICBW, since I'm not a computer programmer).  The latter, because what else explains that circular set of statements which end up chasing their own rhetorical tail?  The fixed rules of inference don't allow a way out of the logical loop.

"It’s important to note that last week we announced a preview of this new experience," the spokesperson told Futurism in a statement. "We're expecting that the system may make mistakes during this preview period, and the feedback is critical to help identify where things aren't working well so we can learn and help the models get better."

Better by doing what?  Tweaking the rules of inference?  Most likely.  Education is how we would handle the situation with a child, who would most likely answer the question with simple confusion, and then move on.  A human would not be expected to get stuck in that loop.  "[A]s Godel showed in his incompleteness theorem, there are innumerable problems in elementary number theory that fall outside the scope of a fixed axiomatic method."  There are even more problems that don't involve elementary number theory which equally fall outside the scope of a fixed axiomatic method.  And yet, unless we truly understand and can adequately define human "intelligence" (you have 30 minutes.  Go!), how do we program it into a computer?  And how do we know when we've done so?  A Turing test?  As the programmers say: garbage in, garbage out.  But how do you know where the garbage is, or if there is any, if you don't have a method for testing the output?  And how do you test the output of a language (v. mathematical, I mean) AI, except subjectively?

What convinces you it's a human response might not convince me.  In that case, who's right, objectively?

The question still is: can a machine be designed for solving every problem?  (And is creative writing, for example, which is supposedly the field of ChatGPT, properly described as a "problem"?)

The human brain may, to be sure, have built-in limitations of its own, and there may be mathematical problems it is incapable of solving.  But, even so, the brain appears to embody a structure of rules of operation which is far more powerful than the structure of currently conceived artificial machines.  There is no immediate prospect of replacing the human mind by robots.

And again, that's just with reference to the field of mathematics.  I understand mathematicians consider math "beautiful."  Would AI?  It might say so, but is it just following its programming to do so?  To go back to "Star Trek," Mr. Spock frequently found situations "fascinating."  That's an emotional response, not a purely rational one.  That's in large part because the character was meant to appeal to human beings, so too much like a computer would be tedious (the Vulcan character on "Star Trek: Voyager" was much less "emotional," and also much less appealing to the audience).  Mr. Spock was also the product of human imagination, which cannot separate emotion from reason (if they can be separated, which is another question).  So could a humanoid being be wholly rational without being emotional?  What, then, of curiosity, the wellspring of knowledge acquisition?

Is AI curious?  Can it be?  This is all part of the problem of how we understand computers, and intelligence, and the human brain (or is it "mind"?):

The Biological Assumption

In the period between the invention of the telephone relay and its apotheosis in the digital computer, the brain, always understood in terms of the latest technological inventions, was understood as a large telephone switchboard or, more recently, as an electronic computer.  This model of the brain was correlated with work in neurophysiology which found that neurons fired a somewhat all-or-nothing burst of electricity.  This burst, or spike, was taken to be the unit of information in the brain corresponding to the bit of information in a computer.  This model is still uncritically accepted by practically everyone not directly involved with work in neurophysiology, and underlies the naive assumption that man is a walking example of a successful digital computer program.

....

The Epistemological Assumption

[A]lthough human performance might not be explainable by supposing that people are actually following heuristic rules in a sequence of unconscious operations, intelligent behaviors might be formalizable in terms of such rules and thus reproduced by machine.  This is the epistemological assumption.

Hubert L. Dreyfus, What Computers Still Can't Do:  A Critique of Artificial Reason.  (Cambridge, MA:  MIT Press, 1992), pp. 159, 189, 205.

The problem of "What is intelligence?," in other words.  We like metaphors of intelligence (or just human cognition, if you prefer a 'cleaner' term) that are machine-like, because then we can imagine (if not build) a machine that emulates it.  Because our intelligence just emulates the machine, eh?  Except it doesn't, and the metaphors fail us because they are simply metaphors, not models of reality.  Arthur C. Clarke in "Dial 'F' For Frankenstein" used the model of the brain as a telephone relay to imagine a SkyNet that "became aware" (and what does that mean, really?) when enough telephones were connected to a sufficently complex system to re-create the neural network (and what the hell is that?) of the human brain.  Considering how many more phones we have now than in the '50's when he wrote that story, it's a wonder that event hasn't happened yet.

Except, of course, that's not at all how human brains and human consciousness (of one's existence, if nothing else.  The definition of "consciousness" is another hurdle, along with whether thought occurs in the brain, or in the mind.  And where is the sense of self located, and what is it?) work.  And of course now we assume we are just more sophisticated, or just more complicated, computers; and the main thing about us as "intelligent beings" is our use of language, so when an AI program can use language....

'Round and 'round we go, "Alone, alone, about a dreadful wood...we who must die demand a miracle." And if we can't get it, we'll make up our own?

Yeah; good luck with that.

1 comment:

  1. 'Reminds me of the old joke: "Be alert! The world needs more lerts!"'

    My AP Calc teacher, Mr Murphy, had a sign with that joke in his classroom. Also, "Time flies like an arrow. Fruit flies like a banana."

    Still wondering when I'm ever going to actually use calc IRL. We invented computers to do math. Not drive my damned car.

    Really, operating a motor vehicle and avoiding everything is so complicated from a processing standpoint, I don't know why we allow it at all, human- or AI-driven.
