Saturday, February 18, 2023

🗑️

Uh-oh:
"Large language models have no concept of 'truth' -- they just know how to best complete a sentence in a way that's statistically probable based on their inputs and training set," programmer Simon Willison said in a blog post.
"So they make things up, and then state them with extreme confidence."
So generative AI is my phone, just with more computing power. But it functions with “extreme confidence”?

Anybody remember the other ST:TOS episode where the inventor of the 23rd-century computer tries to outdo himself, and creates a computer powerful enough to run a starship with “extreme confidence”? Yes, everything goes wrong, but it’s because the designer confuses computer functions with human ones.

If:
A chatbot, by design, serves up words it predicts are the most likely responses, without understanding meaning or context.
Where does the “confidence” come from?
"It's very lifelike, because (the chatbot) is very good at sort of predicting next words that would make it seem like it has feelings or give it human-like qualities; but it's still statistical outputs."
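The mechanism that quote describes can be sketched with a toy example. This is an illustration only, not how any real chatbot is built: a bigram counter that always emits the statistically most frequent next word, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Tiny hypothetical training corpus: the model will learn whatever is
# most frequent here, true or not.
corpus = (
    "the sky is blue . the sky is falling . "
    "the sky is blue . the moon is made of cheese ."
).split()

# Count how often each word follows each other word.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def complete(word, steps=3):
    """Extend a prompt by always taking the most probable next word."""
    out = [word]
    for _ in range(steps):
        word = nxt[word].most_common(1)[0][0]  # statistically likeliest continuation
        out.append(word)
    return " ".join(out)

print(complete("the"))  # -> "the sky is blue"
```

The model answers "the sky is blue" not because it knows anything about the sky, but because that sequence is the most frequent in its inputs; had the corpus contained more "the sky is falling," it would assert that instead, with the same "extreme confidence."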

The problem is not with AI; it’s with how we understand AI, or at least with the concepts we apply to it. Although that old programming adage is still true: garbage in, garbage out.

Laurent Daudet, co-founder of French AI company LightOn, theorized that the seemingly rogue chatbot was trained on exchanges that themselves turned aggressive or inconsistent.
In other words, once again we have met the enemy, and he is still us.


1 comment:

  1. I have not followed these AI stories closely, and can't say I understand much about it, but I think it's fascinating that the machine tends to say whatever the average response out there is, not what happens to be true.

    My understanding of how Trump formulated his rhetoric in the 2016 election was that he would adopt and advocate whatever over time got the greatest applause in his rallies. (The biggest, I think, was building a wall on the southern border and making Mexico pay for it.) If this is how Mr. AI is forming its judgments, we are all in a heap of trouble.

    Kind of reminds me of the old game show Family Feud. Unlike egghead games like Jeopardy, you didn't win by giving the right answer, but giving the answer closest to some random survey.

    Tell people what they already think is right. It's the key to human demagoguery, and apparently the machines are catching on.
