Thursday, January 26, 2023

AI Did It

An AI that writes strikes me as verifiable proof of the infinite monkeys theorem.  If that theorem actually involved monkeys, typewriters, and infinity:

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare. In fact, the monkey would almost surely type every possible finite text an infinite number of times. However, the probability that monkeys filling the entire observable universe would type a single complete work, such as Shakespeare's Hamlet, is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low (but technically not zero). The theorem can be generalized to state that any sequence of events which has a non-zero probability of happening will almost certainly eventually occur, given enough time.

In this context, "almost surely" is a mathematical term meaning the event happens with probability 1, and the "monkey" is not an actual monkey, but a metaphor for an abstract device that produces an endless random sequence of letters and symbols. One of the earliest instances of the use of the "monkey metaphor" is that of French mathematician Émile Borel in 1913, but the first instance may have been even earlier.

Variants of the theorem include multiple and even infinitely many typists, and the target text varies between an entire library and a single sentence. Jorge Luis Borges traced the history of this idea from Aristotle's On Generation and Corruption and Cicero's De Natura Deorum (On the Nature of the Gods), through Blaise Pascal and Jonathan Swift, up to modern statements with their iconic simians and typewriters. In the early 20th century, Borel and Arthur Eddington used the theorem to illustrate the timescales implicit in the foundations of statistical mechanics.
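For the curious, here is a minimal back-of-the-envelope sketch of the numbers behind that quoted claim. The figures are my own assumptions (a 26-key "typewriter," roughly 130,000 characters for Hamlet, one keystroke per second), not anything asserted in the quote:

```python
# Back-of-the-envelope check on the "orders of magnitude" claim above.
# Assumed numbers (mine, not the quote's): a 26-key typewriter,
# ~130,000 characters in Hamlet, one random keystroke per second.
import math

KEYS = 26
HAMLET_CHARS = 130_000

# log10 of the probability that one random run of 130,000 keystrokes
# matches the play exactly
log10_p = -HAMLET_CHARS * math.log10(KEYS)
print(f"P(one attempt matches) ~ 10^{log10_p:.0f}")

# Expected attempts before a match is roughly 1/p; compare that with
# the age of the universe in seconds (~4.3e17).
print(f"Expected attempts      ~ 10^{-log10_p:.0f}")
print(f"Age of universe (s)    ~ 10^{math.log10(4.3e17):.0f}")
```

Run as written, this puts the odds of any single random attempt at roughly one in 10^184,000, which is the "hundreds of thousands of orders of magnitude" gap the quote gestures at.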

An AI program does not work randomly in the mathematical sense, but I would argue it does work randomly in the generally accepted sense.  Unless, that is, we can attribute consciousness to the program, meaning it understands both the words it uses and the ideas it produces with those words.

Then again, most of my students could barely be shown to clear either bar, let alone both, so maybe that's unfair to the computer.  (No, I'm not being sarcastic.)

Then again, the theorem, taken as monkeys producing text which eventually or inevitably produces Shakespeare, is an interesting gloss on the so-called "Turing Test," which some have said AI has already passed.  If monkeys could produce Shakespeare, would that pass the Turing Test, too?  Why, or why not, as the old essay test prompts used to ask?  The theorem is just a metaphor, really; but if we could create the reality with an AI program, would it prove Shakespeare superfluous to needs?  (Where "Shakespeare," too, is a placeholder for creative human beings.)

I wonder.  Frankly, I can't (yet) tell the difference between the claims currently being made for AI and the monkeys, except that the processing speed of computers reduces the need to stretch time out to infinity and the requirement of enough monkeys to conceivably achieve the goal.  Because the presumption of the theorem is not just a non-zero probability of something happening; otherwise we wouldn't need the theorem.  We could just settle for:  "It could happen!"  Which is pretty much what "any sequence of events which has a non-zero probability of happening will almost certainly eventually occur, given enough time" means when it's at home.*

So has AI created a brave new world?  Or has it just created an infinite number of monkeys with an infinite amount of time, energy, paper, and typewriter ribbon?

The key to the metaphor is that the monkeys don't know what they are doing, but given world enough and time, they, too, could do it (mostly because monkeys have hands similar to ours, though I'm not sure their thumbs would be much use on a keyboard; but their sitting posture is much like ours, too…).  Except, of course, they couldn't; and that's what makes AI both A and I.  Supposedly.

But what is intelligence?  AI is reportedly writing essays and class papers; but that only proves class papers and essays are by and large a matter of tropes and clichés du jour.  Few of us write like, or want to write like, Bacon or Montaigne (or Shakespeare, for that matter.  As magical as Shakespeare's characters may be (magical in the sense of fully human), it is Shakespeare's language that is the marvel.  Could he do the same things in Modern English that he did in Early Modern English?  Signs point to "No."  But who can say?).  Is that good, or bad?  If we programmed AI to reproduce Bacon's style, would it be praiseworthy?  Or just seem like a waste of computing power?  Who needs Francis Bacon redux, after all?

So is AI really I? Or is it just reductively approximating a human activity, repetitively producing whatever it is programmed to produce, which is whatever the programmer and the approving audience deem “good writing”? Because AI doesn’t know what good writing is. It doesn’t even know what it is doing (nor is it aware it’s doing it). Even a dog or a cat, which can display intelligence, is aware of its self as a being (without necessarily being self-aware) in space and time (again, without necessarily having abstract concepts of either) and in relation to other creatures.

Is a computer running an AI program aware of any of that?

And then there’s the question of creativity which, despite the tweet, AI is nowhere near. AI cannot exceed its program and write in a style original and unique, much less create the neologisms or deathless phrases of a Shakespeare (or even the conceits of a Donne). If our modern language changed enough again, as in Elizabethan times, to encourage invention and new artistry among our poets and writers, would AI lead the way, or at best only slavishly follow along? And there we reach the real limits of AI, for, never being human, it can only be useful insofar as its output is judged so by humans. And AI, not being human, will always at best ape humans; which means always, at best, being one step behind humans.

About writing, Twain said:

The difference between the right word and the almost right word is the difference between lightning and a lightning bug.

Will AI ever know that difference? Only if it can be human, and so think like one.


*Yes, I know, the theorem is an explanation of why it could happen. I’m not being reductionist, just focusing the argument for rhetorical purposes.

2 comments:

  1. One of the first questions that never seems to be asked is: which humans does the computer have to convince it is a person, and does that have to be 100% of them? If not, then it might be more a measure of human gullibility than of "machine intelligence". If it's AI "scientists" that it has to fool, well, who died and made them definitive? The Turing Test was not Turing's greatest idea.

    As for the Twain quote, it's hard enough for a great writer to know the difference between their lightning and a lightning bug on any given day. Rewrites, stuff that looks like junk a day or week later, entire novels that are a disaster from an otherwise fine writer ("The Ponder Heart" comes to mind). Even the greatest writers seldom had more than a small handful of great books in them, and lots of them produced real stinkers. And there is my favorite phenomenon of living "great writers" whose work immediately goes into the disused category shortly after they die and the critical and academic promotion of their "genius" dries up.

  2. Oh, and I just looked up James Marriot, YT musician and "content creator." No wonder he wants to demote human creativity. I wonder if he ever heard the expression "sour grapes."
