Sunday, May 03, 2026

The God Saturated Universe

 My father was a CPA. I was an adult before I fully understood what that was. I knew he “did taxes” every year, because his workload increased exponentially between January and April. Otherwise, as a child, I could never explain to my friends what he did for a living. Well, he wasn’t a doctor or a police officer or a fireman, so….

He would explain his role this way: a bookkeeper, he said, would say, “2 + 2 = 4.” An accountant would say, “What do you want it to be?”

I used to think our quest for absolute certainty was the problem. We want 2 + 2 to always equal 4, even though we all know that’s often not the answer best suited to the task. It matters when reality catches up: consider Spirit Airlines, which by all accounts was always running just ahead of the financial calculation by any means possible, until sharply rising fuel prices put the bookkeeper in charge of the accountant. But even then, some blame not arithmetical certainty but any scapegoat they can find. (Preferably Joe Biden, whose DOJ sued to block the merger of Spirit and JetBlue, which, in hindsight, would absolutely have saved Spirit from extinction. Or the blame falls on Sen. Warren, who praised the judgment of the court in upholding antitrust laws. Others argue Spirit would soon have bankrupted JetBlue, whose business model is not much stronger anyway.)

Reality is always a matter of “what do you want it to be?”

So now we have LLMs, and there is always someone in the replies on Twitter asking what seems to be empty air (I never see the answer in the replies), an account named “Grok,” whether the assertion in the original tweet is “true.” Because a computer program deals only in “objective knowledge,” and therefore only discerns truth. Or some such nonsense.

My interaction with “AI,” much of it involuntary and unwilling, leads me to the conclusion that it’s just a souped-up search engine that copies and regurgitates content posted online. Some have noted it is an automated stealer of content, automated to repackage it, apparently ranking that content by its currency, that is, by how often it is clicked on in web searches; so it is probably also stealing the results of past automated searches instead of evaluating the quality of what it steals. In that regard, it has the same relationship with original thought that Temu has with original design.
TC crystallizes my thoughts exactly, although I can’t say I’ve ever interacted with AI; I’ve just seen the results. And I’m not impressed, precisely because it’s clear to me AI is not thinking: it’s just regurgitating.

I knew a student in graduate school who was a perfect example of AI, although at the time even PCs didn’t exist. He had not a single thought of his own; well, not about the subject matter of our seminars. He simply repeated whatever the professor had said, sometimes practically verbatim. The difference between him and AI was that he didn’t fool anyone. I encountered a similar student in seminary. She came in trailing clouds of glory. The rumor mill immediately dubbed her the smartest student in the school. The word from the professors, however, soon trickled down that, no, she wasn’t. And, as in graduate school many years before, her performance in seminars soon proved she was just parroting ideas with absolute confidence, but with no idea what the words meant.

She was very confident in her authority. She was vacuous in her knowledge. (She graduated, btw. After all, she displayed knowledge, even if she didn’t really have any. Ultimately, schools can guarantee neither knowledge nor wisdom. That’s another disturbing reality.)

We like authority because it removes responsibility from us. Why struggle to learn, and think, and reason, if you can just ask an LLM for the answer? And if it gives you the answer you like, well, you must have been right all along! And certainly the LLM is authoritative. Look how much it “knows”! But LLMs just regurgitate content, or assemble content to fit the request, like the nonexistent law cases cited in legal briefs. The LLM is not a lazy law clerk; it simply doesn’t understand the difference between reality and a request for output. It doesn’t think; it simply provides patterned responses. The AI on my phone has learned to “anticipate” the words I will use in a sentence, based entirely on pattern recognition. It’s not eerily sentient, it’s just convenient, since I’m typing with one finger on a virtual keyboard. (My thumbs are way too fat for the two-thumbs technique.) An LLM has access to more data than my phone, but that’s it. Otherwise, there’s not much difference between the two.
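That keyboard-style “anticipation” really is just counting. Here is a minimal sketch in Python of a toy bigram predictor, with a made-up corpus; it is an illustration of word-frequency pattern matching, not how any actual phone keyboard or LLM is implemented:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which word most often follows it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(following, word):
    """Suggest the most frequent follower, or None if the word is unseen."""
    counts = following.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A made-up corpus for illustration.
corpus = "what do you want it to be what do you think it to be"
model = train_bigrams(corpus)
print(predict_next(model, "do"))  # prints "you"
```

No understanding anywhere in there: just tallies of what has followed what, which is the point being made above.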

But we want there to be, because we want an authority to relieve us of the uncertainty of reality. Richard Dawkins has always tried to reduce existence to an either/or, and to reduce that to an absolute certainty he can leave alone. So, a little time with an LLM, and he’s done. Consciousness solved, the whole matter put to bed. The universe is explained by how much it approves of Richard Dawkins. Or, at least, how much a computer program does. Reality is not reducible to an arithmetical equation. (Bertrand Russell spent years establishing the logical basis for 1 + 1 = 2. It turned out his argument had nothing to do with reality at all.) And it really does come down to: “What do you want it to be?” Because aside from the few analytical statements Hume said we could make (and they aren’t really significant), the rest is a matter of argument. And it’s that uncertainty that makes us seek authority to establish a certainty we can cling to. Because the alternative is responsibility; our responsibility. Responsibility for determining who we are, responsibility for determining what we should be doing. Responsibility for determining how we should then live.

Which is a much more frightening matter, indeed.
 
