One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?
You may be a bit confused, but the founder of LessWrong, Eliezer Yudkowsky, was not. He reacted with horror:
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
What strikes me is that in 800 comments (as I write), I doubt anyone brought up the obvious: that this sounds a lot like "sympathetic magic," a concept I got from Frazer's The Golden Bough.
And it is, of course, a superstition which reasonable people are not supposed to be subject to.
But, you see, "you might already be in the computer's simulation." No, I'm not kidding; "and what you do will impact what happens in reality (or other realities)." Except it won't, of course, because "you" won't do anything, being at this point in the analysis merely a simulacrum of you inside a simulation created by a supercomputer working within the confines of Newcomb's Paradox. That "you" doesn't affect reality at all; nothing does until, in reality, you actually make a decision. What affects reality first is the outcome of a set of algorithms used to program the fictional supercomputer, which isn't real at all. So why we are discussing how fiction reaches out into reality to change reality, without an intervening agent like, say, a human being, is really a question for literary critical theory, which handles this kind of thing far better than game theorists do. Surprisingly.
But apparently this keeps game theorists up at night, so let's not disturb their insomnia.....
But Newcomb's Paradox (which isn't nearly as interesting as anything Zeno came up with)* ties into the problem of Roko's Basilisk:
You may be wondering why this is such a big deal for the LessWrong people, given the apparently far-fetched nature of the thought experiment. It’s not that Roko’s Basilisk will necessarily materialize, or is even likely to. It’s more that if you’ve committed yourself to timeless decision theory, then thinking about this sort of trade literally makes it more likely to happen. After all, if Roko’s Basilisk were to see that this sort of blackmail gets you to help it come into existence, then it would, as a rational actor, blackmail you. The problem isn’t with the Basilisk itself, but with you. Yudkowsky doesn’t censor every mention of Roko’s Basilisk because he believes it exists or will exist, but because he believes that the idea of the Basilisk (and the ideas behind it) is dangerous.

First, you should congratulate me for avoiding any references to LessRight people. It was hard, believe me.
Second, that's where the problem lies: timeless decision theory, which has something (oh, read the article!) to do with Newcomb's Paradox.
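For the curious, here is the arithmetic that gives the paradox its bite. This is only a toy sketch: the $1,000,000 and $1,000 figures are the standard ones from the puzzle, but the predictor-accuracy numbers (and the Python) are my own illustration, not anything from LessWrong or the article.

    # Toy expected-value arithmetic for Newcomb's problem.
    # The payoffs are the standard ones; the accuracy figures are made up.

    def expected_payoff(one_box: bool, accuracy: float) -> float:
        """Expected dollars for a chooser the predictor gets right with probability `accuracy`."""
        opaque_full = 1_000_000   # in the opaque box only if the predictor foresaw one-boxing
        transparent = 1_000       # always in the clear box
        if one_box:
            return accuracy * opaque_full
        # Two-boxers keep the $1,000 and get the million only if the predictor guessed wrong.
        return transparent + (1 - accuracy) * opaque_full

    for acc in (0.5, 0.9, 0.99):
        print(f"accuracy={acc}: one-box={expected_payoff(True, acc):,.0f}, "
              f"two-box={expected_payoff(False, acc):,.0f}")

Run it and the tension shows up immediately: once the predictor is even slightly better than a coin flip (about 50.05 percent, with these payoffs), expected value says take only the opaque box, while the dominance argument says take both boxes no matter what the predictor did. That tension, not the alien's bank account, is what the decision theorists are arguing about.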
But I still say it has far more to do with sympathetic magic, or just magical thinking in general, because the very concept of magic is that the will can create: that just as creation itself was spoken into being by the divine speech-act "Let there be...," magic calls into existence whatever the speaker wills. And so Roko's Basilisk becomes that worst of all possibilities: a dangerous idea. Not because it will motivate men to war, like nationalism does, but because it may prompt the creation of an AI which would act on the idea behind the idea of the Basilisk.
Which, as I say, is known in some quarters of the internet as "Bronze Age Mythology."
Besides, Harlan Ellison already wrote this story ("I Have No Mouth, and I Must Scream"). Maybe we can blame him when the machines take over....
*Two premises of the paradox are a super-intelligent alien (because, why not?) and a supercomputer. The latter makes some kind of sense, if only because the supercomputer could run the calculations that could accurately (screw probability, we got AI!) predict the future. No mere computer can do that! It can also, of course, have AI, an undefined term that means whatever we want it to mean and, in this case, means: "MAGIC!" But then again, we've thrown out probability and replaced it with certainty (which is not how probability works, but hey, it's a paradox, not a science lesson!), so why not include as much magic as necessary to make the paradox, er...paradoxical?
Why the alien has to be super-intelligent, or even an alien, and how it possesses $1,000,000.00 it wants to give away, are separate questions. Maybe because aliens don't know the value of money, or don't care, and a super-intelligent alien is obviously better than a mere alien as smart as us? I don't know, but there it is...
Good googly-moogly: I take it these wackos think themselves "less wrong" than theists?
I want to live a long (and goodly) life...but not so I survive to some "singularity"! And when I'm gone, strip my carcass for usable organs then COMPOST me. God forbid my carcass ends up in a freezer...