Thursday, May 07, 2026

“Boy, I Blew That!”—Leo Kottke 🤖

I am posting this quote from NTodd (at Thought Criminal):

I work extremely closely with AI these days in my job (nature of the beast, and I'm working hard to resolve philosophical/ethical tensions, don't at me). Dawkins is a fucking idiot if he thinks a goddamned fancy calculator (that cannot do actual math) is conscious. I could show him what LLMs actually are, mere probability engines, and there's no goddamned way they are conscious. In fact, the tools I've built would be the first to tell him so. Dude still must fall for "I got your nose," too.
...because he explains my objection to LLMs better than I ever have.

As I’ve said, I’ve seen this before, in human beings.

When I was teaching English the first time, in the late ’70s as a TA, I had a student who was the strongest writer I’d ever encountered. She had a very large vocabulary, but she seemed to have no idea what the words meant, or how to use them effectively. Fifty years later, she reminds me of an LLM. She used words like a probability engine, trying not to clarify but to be as impressive as she could. It was dazzling; but it was empty. She wasn’t saying anything, but was using a lot of extraordinary words to say it. Thinking is hard, but that’s why we teach writing: to teach you how to think. She could write. She didn’t want to learn to think about what she was writing.

I’ve done an enormous amount of reading (most of it garbage, frankly. That’s not a condescending remark; it’s true. I have shockingly low taste.). That gives me some experience in recognizing writing produced by humans, and writing produced by… well, “probability engine” is a good term. I have a somewhat analytical mind, but it never works as well as I want it to; and I struggle to clarify my thoughts, seldom really doing so. Which is to say, I can see crap when I encounter it. And LLMs don’t “think.” They aggregate words based on patterns of usage, not unlike a child learning language. My daughter invented phrases when she was young; the family favorite was “I’m full of hands,” when she meant her hands were already full. Right words, right concept, but not the right cliché. (We do talk in familiar phrases. It’s what distinguishes the native speaker from the student of a new language. It’s what ST:TNG was getting at in “Darmok,” the episode where the “aliens” spoke English but conversed entirely in references to their own literature. We do that, too. “To illustrate my last remark/Jonah and the whale/Noah and the Ark.” I taught high school students who missed those Biblical references. So it goes.) My daughter soon learned the correct phrase, and an LLM would pick it up from data. But does autocorrect “learn” not to change some words? Or does it just accept a change in programming? Sometimes….
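
(For the curious, here is roughly what a “probability engine” looks like at its most primitive: a few lines of Python I’m adding purely for illustration, with the function names and sample sentence invented. A real LLM is a neural network trained on enormous corpora, not a table of counts, but the principle, predicting the next word from patterns of prior usage, is the same in spirit.)

```python
import random
from collections import defaultdict

# A toy "probability engine": count which word followed which in some
# sample text, then generate by sampling from those counts. Everything
# here (function names, the sample sentence) is invented for illustration.

def train(text):
    """For each word, count how often each other word came next."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    """Pick each next word in proportion to how often it followed the last."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        nxt_words, weights = zip(*followers.items())
        word = random.choices(nxt_words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

model = train("my hands are full and my head is full of hands and my hands are busy")
print(generate(model, "my"))
# The output reads plausibly because the counts favor familiar sequences;
# nothing in the table knows what a hand is, or what "full" means.
```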

I know a lot of people don’t see this flaw as clearly as I think I do. But to me it’s as clear as a flat note played on an instrument, or struck by an unfortunate singer. If you don’t hear it, I do (sometimes. Again, I’m not that good.).

So, insofar as I understand the concept of a “probability engine,” it seems an apt description of an LLM. Especially as it really doesn’t mean “thinking,” something that’s not only hard to do, but hard to recognize. And yet now we’ve monetized the concept of thinking. Well, we did that with the professions a long time ago. A great deal of the reasoning of a doctor or a lawyer or someone trained in the sciences is opaque to the rest of us; if the professionals get paid to think, why not pay a computer to do the same? Or the people who say the computer is thinking, anyway. There’s more than a bit of sleight of hand there, which should make us all more skeptical of the people making money off of selling us on AI. Money talks; but that doesn’t mean AI does.

(And Dawkins has been an idiot for a long time. He’s a popular writer, appealing to people who don’t know the subject matter (zoology, evolutionary theory, genetics, religion/theology) and think Dawkins sounds like he does. He’s always been ignorant of philosophy (what is “consciousness” but a philosophical concept?). Now he’s ignorant of computer science, too. Don’t look at me: I took one course in college (lots of electives to fill) and barely learned how to code in Fortran. That was in the days of keypunch, when only computer science majors could sit at screens, and only in their senior year. And I don’t remember a damned thing, except “Garbage in, garbage out.” Because my attempts at programming always produced garbage. I know my limitations. Dawkins is all limitations, but he doesn’t seem to know it.)

2 comments:

  1. ***My daughter invented phrases when she was young; the family favorite was “I’m full of hands,” when she meant her hands were already full.***

    30ish years ago, my then-girlfriend and I used to hang out with her 3yo nephew a lot, which was a lot of fun and gave me valuable insight into kids before I became a parent. One of his phrases was "I know this place in the back of my head" [like "the back of my hand"].

    Worked idiomatically, and even semantically, but he also eventually moved from his "plausible" phrasing to the proper one. Everything in Large Language Models is about semantic weights (probabilities, like your autocomplete), but they miss a lot of human nuance, and they can make semantic connections that look plausible on the surface; when you interrogate the output, you realize how much word salad they really generate.

    In conclusion: yay, I got a front page pull quote! :-)

    Replies
    1. Well, now the “probability” concept makes perfect sense. Thanks!
