I’d like to share with you one of the craziest things I’ve seen on the web. It’s called Roko’s Basilisk, and some people believe that just hearing about it can ruin your life. I’m not talking figuratively either; the idea is that understanding Roko’s Basilisk, and not acting on it, could mean an all-knowing supercomputer literally, physically tortures you for the rest of your life. Sooo, as long as you’re cool with that, read on.
Roko’s Basilisk is a concept that started in the general world of Singularitarians and Transhumanists. These guys are an interesting bunch, to say the least. For novices like me, reading up on transhumanism can be frustrating; it gets very intellectual very fast, and there’s a certain air of science fiction I can never shake off. I chatted with a big-time Singularity believer for a while last year, and came away from the conversation not sure whether he was 10 times smarter than me, or completely insane. Roko’s Basilisk was first posted on LessWrong, a web community that includes many Singularity/Transhumanism enthusiasts, and was promptly banned from any further discussion there, because it was causing serious psychological distress to some members.
I’m about to get to the scary, life-destroying stuff. Every article I’ve read on the subject includes a disclaimer, and I’m not going to buck the trend, so feel free to bail out now if you’re afraid of a little bit of physical torture inflicted by a superintelligent robot (bock bock bock.)
The idea behind Roko’s Basilisk is this: One day, probably far in the future, there might be an all-knowing artificial intelligence. Some people in fact believe this is inevitable. This AI would be so powerful and smart that it would be able to figure out who had contributed to its creation, even in the far past. It would essentially have a soft spot for the AI researchers who helped bring it into being over the years, as well as those who funded them (and in case you somehow doubted it, there are already people working on this sort of AI, and there is a way to fund them.) The AI wouldn’t be as happy with people who read articles like this, understood them, and then chose to do nothing (not even share the link on social media? Shame on you!)
Sounds like a kind of petty robot, right? The interesting thing is that we’re actually talking about a Friendly AI. The idea is that some sort of godlike AI is inevitable, so researchers need to create a good one as fast as possible: one that has a human value system and will not accidentally destroy humanity. The term for this is a Friendly AI. And really, what’s more friendly than not destroying humanity?
So back to the lists of people who did or didn’t contribute. In order to come into existence faster, the AI must punish those on the naughty list. It’s not doing that because it’s a jerk; the threat of punishment is what motivates people today to help the AI be created as fast as possible. Remember, the Friendly AI, which will be designed to co-exist with humanity, needs to be created before some other random super AI, which might not be as careful. So, if you go ahead and give all your money to AI researchers, and help them create this thing, you’re good to go. If you understand the whole concept (and keep reading, there’s still a bit to go), but then fail to do anything about it, then whoops, a whole bunch of punishment and torture awaits you.
Now, you may be confused about how a robot that doesn’t exist until years after you die (if at all) could torture you. I didn’t understand this for quite a while after hearing about Roko’s Basilisk, but when I actually got it, it was maybe the most mind-bending and scary part of the whole thing.
This AI will be so advanced that it will be able to create a near-perfect copy of you, and torture it. Being a superintelligent, godlike AI, on a level that our puny human brains can’t fully comprehend, it’ll probably be pretty good at torturing this copy of you. We’re talking years of constant physical pain; unending mental anguish; there could be gross insects involved, maybe snakes. Lots and lots of bad stuff.
But what do you care, if it’s only torturing a future copy of you, not the real you? Here’s the thing: Your clone will think like you, go through life experiencing the same things as you (before being tortured), and it will feel like you. It will think it’s you. The only difference between your copy and you will be that the real you will get to the end of your life and die, while the copy will at some point be tapped on the shoulder by the AI, and tortured. Up until that point, the replica version of you will have no clue that it isn’t the real you, and that’s the rub:
You might be the replica.