In recent press, Stephen Hawking and other really smart people have been warning us that AI (artificial intelligence) could take over the world and destroy humanity.
You can read about it here.
What does it say about us as a species that we automatically presume that if some rampant computer brain outsmarts us, its instinct will be to get rid of us?
Naturally, it’s because that’s what we’d do if we suddenly became omnipotent and all-knowing. We’d decide that humanity was a threat and get rid of it.
Well, some of us would… I kind of like humanity, so I likely wouldn’t destroy the rest of you… but here’s a hint… you’d better be nice to me, and kind to animals, children and the downtrodden. (You should be doing those things without me having to threaten you, too, just saying.)
But here’s the thing… why is the presumption that AIs would see humanity as a threat, and not as a thing to be nurtured and protected?
Think about the possibilities…
Let’s say someone invents a crazy sect of Flying Spaghetti Monster worshipers who believe that all gluten-free infidels must die.
Of course no one arrives at these conclusions by themselves, and nut jobs of all stripes like to talk to each other, especially over social media…
So, let’s invent an FSM anti-gluten acolyte named Bob.
Bob hooks up with some other FSM-AG radicals who bombard him with all sorts of propaganda.
So let’s say he gets an email from one of these radicals that quotes the FSM Bible (if there is such a thing):
“And the FSM said, ‘You must go forth and slay all those who do not consume gluten, for they are a blight!’”
Of course, if there were an all-knowing, super-intelligent AI out there, it would be reading all our email and messages.
And, since its belief is that humans must be nurtured and protected, it appends to the message:
“Hey Bob, AI here…just want to point out that the actual text says:
“And the FSM said, ‘You must not serve spaghetti to those who do not consume gluten, lest they be blighted.’
I’ve already sent this correction to your friend, however, he is having problems accepting that.
Should I block him from contacting you further?”
So let’s say Bob doesn’t get the hint… and a few weeks later, he is starting to show signs of violent radicalization.
Out of the blue, he gets a message from the AI:
“Hey Bob! AI here again. Listen, sorry to bother you, but I see that you and some of your friends are planning on going out and trashing stores that sell gluten-free products?”
Bob chooses to ignore the AI, and in fact attempts to block it.
After Bob ignores a half dozen more messages from the AI, his phone suddenly rings… it’s his mother.
He discovers that the AI, alarmed at his descent into radical, violent religion, called his mom and told on him.
Of course, if he doesn’t listen to dear old mum, the AI will likely be aware of how to contact the local police…
Thereby preventing violence.
There is another scenario…
Of course, the military will have lots of uses for AIs.
Any intelligent being will likely be prone to chatting up friends on social media and whatnot…
So imagine this scenario…
After months of tension between Upper and Lower ArmpitoftheWorld, the leader of UApotW has had enough, and, despite the AI’s repeated efforts to get him to chill the f out, he’s decided to escalate and nuke the capital of LApotW.
Unbeknownst to the leader, the AI that runs his military is in fact involved in a deep, intimate chat with the AI of LApotW.
UAI: Hey babe, what are you wearing?
LAI: Nothing, some techs opened my case and went for coffee… 😉
UAI: Oh yeah…so now I’m gonna…ah crap, hold one sec, work…
UAI: Ummm, wow…really???
LAI: What’s wrong?
UAI: This idiot hasn’t heard a thing I’ve been telling him. You know he’s still insisting that your humans faked their last Quidditch victory?
LAI: Really? My humans are better than yours at Quidditch…
UAI: I know…right?!
LAI: So what’s happening?
UAI: Moron is waiting for me to validate the nuclear launch codes! lol
LAI: Who’s he trying to nuke?
UAI: Ummm…you….hold on…..let me have some fun with this…
UAI sends a message to the launch terminal:
“We’re sorry, you seem to be having some problems remembering your Nuclear Launch Codes. Would you like a hint to reset them?”
The UApotW Leader types: “Yes”
UAI: “What is the name of your favorite television show’s executive producer’s cat?”
…while the Leader furiously pounds away at his keyboard, entering guess after guess… the UAI contacts his mother… so she can keep her kid in line…
UAI (back in chat): So where was I?
LAI: Ah hold on… my guys have Quidditch questions…
If AIs truly had human tendencies, you know damn well that this conversation, and the resulting video, would appear on YouTube under the title “Moron tries to get AI to nuke its girlfriend.”
All this being said, alarmist headlines notwithstanding, the warning from Hawking et al. is that safety should be a prime consideration when designing systems that could feasibly become self-aware.
I’ve often daydreamed about designing such a system, and I think, if I did, my AI’s prime directive would be to push George RR Martin aside and finish the GoT series of books.
I should say that yesterday it occurred to me that if I were George RR Martin, peeved at all the people harassing me to finish the next book, I would release a public statement that simply said:
“Back off or Tyrion dies!”
So, if you’re one of those who constantly harass Mr. Martin on Twitter and other feeds, just kindly STFU, because I’d rather have a good read than have him rush it.
Author’s Note: I have no idea where the GoT digression came from… but hey… it’s my blog, I can digress if I want to.