I FOR ONE WELCOME OUR ROBOT OVERLORDS / by Jean-François Dubeau

Let me preface this post, if you don’t mind. I believe that Elon Musk is a genius. Besides being an incredibly successful businessman, he’s also a risk-taker and a man who looks towards the future like few in his position do. While it’s undeniable that he profits from his many ventures, it’s difficult to argue that his various enterprises don’t also serve to better humanity. When this great man one day shuffles off his mortal coil, the world will be less for his passing, but better than it would have been without him.

I also think he’s fucking wrong about a subject close to my heart and mind.

In fact, he stands with at least two other bona fide geniuses of our age on a position that I find absurd and short-sighted. Both Bill Gates and Stephen Hawking, men who have accomplished more than I could ever dream to, share Elon’s opinion that once the Singularity comes, humanity is super-screwed.

I’m here to tell you to put down the pitchforks. Don’t destroy your computer or flush your smartphone down the toilet. There’s no need to revert to an agrarian society and fall back into full-on atavism. I assure you that, despite some of the greatest minds of our time telling you the contrary, the inevitable Singularity and birth of Artificial Intelligence will not be the end of our species.

But first, let’s agree on something:

The Singularity Will Change Everything

While I’m going to argue that the coming of independently thinking Artificial Intelligence is unlikely to plunge us into a Terminator apocalypse scenario, I’m in no way going to pretend that once we share our world with thinking, sentient robots, everything is going to be business as usual.

The fact is, as a species, we’ve never shared the planet with another kind of sentience*. What does that mean in practical terms? Well, for the first time in our relatively short history, we will have a point of comparison for our very way of life. From our way of thinking to our morality, our social opinions and even our outlook on the universe we live in, everything we are and believe will be compared to how this other form of life interprets things.

Having a ‘roommate’ against which we can compare ourselves will alone drive incredible changes in how we behave as a species. Historically, besides wars over resources, civilization as a whole has always benefited from the meeting of different cultures (usually in the long term). There’s little doubt that sharing our existence with a vastly different form of intelligence will yield very much the same kind of benefits, but on a much greater scale and with fewer growing pains.

How different is post-singularity intelligence going to be?

Robots aren’t 'Humans-plus'

For starters, we have to set aside the myth that sentient robots are going to be mechanically enhanced reinterpretations of us. While humanoid robots adopting our mannerisms and our emotional and intellectual limitations makes for great fiction, it doesn’t reflect the picture of the future our current reality paints.

To begin with, the human form (two legs, two arms and a head) is by no means the ultimate physical shape for a successful creature. We tend to think it is because it’s what we’re stuck with and we’ve done pretty well for ourselves. However, this body plan isn’t the only thing available to an imaginative mind. Also, while it might be a successful shape for the environment we live in, it might not necessarily be the best for other, much more exotic places. In short: their form isn’t going to be a stronger, faster, tougher version of the human anatomy. It’s going to be whatever they feel they need. Bonus points if you realized that individual AIs will each have a unique form.

The body is, however, the least unfamiliar aspect of artificial sentience we have to consider. Movies, stories and even my own book, The Life Engineered (plug!), assume that thinking robots would have a psychological makeup similar to our own. That they’d have the same emotional needs and react the same way to adversity. Terminator’s Skynet feels threatened by humanity and launches a pre-emptive nuclear strike, for example: a very human reaction to fear. The machines in The Matrix fight humanity over resources in a way that builds an interesting narrative but ultimately doesn’t match how an AI might approach such problems. Our synthetic descendants won’t think and feel like us because they won’t have brains that develop like ours. They will have their own way of making sense of the world, and should they be emotional creatures, they won’t interpret emotions similarly, assuming they even have the same catalogue of emotions.

“Ah!” I hear you say (because my ears are just that sensitive and, apparently, I hear the future), “but what if they don’t have emotions and decide that we humans have served our purpose in creating artificial intelligence?”

That’s a very human thing to think, and that’s probably why it’s so wrong. You see…

Sentience isn’t iterative

Going back to how we haven’t really shared the planet with another sentient being yet: having been alone for so long, it seems our species has adopted the Highlander code of existence: “There can be only one!”

This makes for great drama but again, doesn’t fit the facts. We have to remember that:

  • We won’t be competing for resources. AI won’t want our food, our water or possibly even our air. They won’t fight us over our men and women, and they are very unlikely to challenge our claim to real estate.
  • There is no state of obsolescence for sentience (yet). The sad truth is we already struggle with ‘why are we here?’. Having another species to share our lives with isn’t going to make that lack of existential purpose any worse. In fact, it might ignite the fire in our souls. We’ll have created new life, after all! On to the next challenge.
  • Diversity breeds creativity. Having more than one type of sentience is more likely to benefit all parties than either being alone. More than the sum of our parts, etc.

The same way that parents don’t phase out older children as they breed, adding AI to the family of human experience won’t require the current model to be retired. All evidence points to coexistence being beneficial to all, and should it not be, the lack of competition for resources means there is no need to eliminate the other for self-preservation. Even without collaboration, coexistence remains the most profitable solution.

The Singularity Won’t Be Sudden

This is probably one of my favourite points because it runs counter to how a lot of us see the Singularity happening.

By its definition, the Singularity is a cascading effect where computers become so sophisticated that they can make better versions of themselves faster than we can predict or control, until they evolve beyond anything we can comprehend and presumably become self-aware. It’s an amazing concept that opens the door to a long list of science fiction tropes, each more entertaining than the last. However, let’s do a little thought experiment:

When you upgrade your computer, what are (a) the reasons and (b) the method?

Invariably, the reasons boil down to being able to run more powerful software. Be it a more advanced game or a new version of an operating system, that’s why you upgrade a computer. The method is even more straightforward: you replace parts of the machine, or the whole thing. For the runaway effect of the Singularity to happen, someone needs to design a machine that can support and sustain it. No one’s creating that advanced a piece of hardware by accidentally slapping two computers together. Watson was able to compete on Jeopardy! because it was designed to understand and answer questions. Deep Blue played chess at a grandmaster level because it was designed to do so. These machines and others of their kind are impressive, but they couldn’t achieve sentience before hitting the plateau of their operating capabilities. At least, not without human interference.

I understand that fiction demands a machine designed to iterate around the ceiling of its limitations, but in actuality, when such a machine is built, no one will be surprised, and it will likely be constructed with that purpose in mind.
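
To put some toy numbers on that plateau argument, here’s a small sketch of my own (the 50% per-generation gain and the ceilings are made-up, illustrative values, not anything from the literature): software can iterate on itself all it likes, but it stalls at whatever its hardware supports, and only a deliberately bigger machine raises the plateau.

```python
# Toy model of recursive self-improvement hitting a hardware plateau.
# All numbers here are arbitrary and purely illustrative.

def self_improve(hardware_ceiling, capability=1.0, gain=0.5):
    """Iterate 'design a better successor' until progress stalls at the ceiling."""
    generations = 0
    while True:
        successor = min(capability * (1 + gain), hardware_ceiling)
        if successor - capability < 1e-9:  # no meaningful improvement left
            return generations, capability
        capability, generations = successor, generations + 1

# Software alone stalls at whatever its machine supports...
print(self_improve(hardware_ceiling=100))        # -> (12, 100)
# ...and only a deliberately bigger machine raises the plateau.
print(self_improve(hardware_ceiling=1_000_000))  # -> (35, 1000000)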

Let’s not forget that AI is already slowly creeping its way into our everyday lives. Contextual searching, software that learns and adapts, voice and pattern recognition are all baby steps towards the singularity, and yet in a matter of years they’ve become almost invisible to us. By the time the actual Singularity with a big ’S’ hits, we’ll be so accustomed to these types of innovations that, while still a change, the event will not rock us so hard as to start a man vs. machine war.

Artificial sentience won’t have our needs or limitations

The final and biggest discrepancy between fiction and reality where the emergence of sentient machines is concerned is also the source of most human conflicts: resources. Traditionally, we fight our neighbours because we want what they have. We want their land, their spices, their oil, their riches. Machines will not have these needs. There are two types of scenarios for the development of artificial intelligence.

  1. Physical robots. In this scenario, AI develops in the physical world. While the sentient machine is obviously software of some nature, it interacts with itself and us through the physical world. It has a body and a voice; it sees and hears and moves. This is the scenario where the machine has the most obvious resource requirements. It will need power and materials to build replacement parts or other robots. However, let’s stop there for a moment. The idea that a robot would be lonely without others of its kind is a very human one. There is no reason why the Singularity would not result in a single robot that is perfectly content with being unique. But what if it does want to create copies of itself, and it requires significant amounts of resources to do so? Then why would it not get these resources itself? Another human flaw in our thinking is that robots will look at everyday appliances and see themselves in these machines. A recent video of Boston Dynamics’ robot, Spot, being pushed and kicked to test its ability to stay upright was met with jokes about how this is why the robots will rise against us. However, without sentience these machines are just tools, and there’s no reason why they wouldn’t also be tools to a post-singularity intelligence. Even in the physical world, there are very few motivating factors for machines to rise up against us.
  2. Virtual intelligence (I’m sure there’s a better term for this). This scenario is, I think, the most likely: post-singularity sentience that resides completely within the computer. Our fleshy limitations lead us to believe these would be akin to brains in jars or, in a less gruesome variant, would inhabit a virtual reality. Chances are it would be neither. This is by far the most alien of situations, as an intellect that evolves and exists solely as an abstract creation is like nothing we’ve had to deal with before. It’s not exactly certain we’d even recognize it. While strange and uncertain, it is the safest of scenarios. Such a machine would need us to continue its existence. Without bridging out into the physical world, all its resources would come from us, and by that logic we would hold tremendous power over it (see the sketch after this list).
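
Here’s a minimal sketch of that last point. It assumes Python on a Unix-like system, and the limits and the stand-in workload are arbitrary placeholders, not a real containment design; the idea is just that a program living entirely inside a computer gets exactly the CPU time and memory its host decides to grant it:

```python
import resource
import subprocess

def limit_resources():
    # Hard caps applied to the child process before it runs (illustrative numbers).
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))  # 5 seconds of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MiB of memory

# Stand-in for any untrusted program: this one just burns CPU forever.
child = subprocess.Popen(
    ["python3", "-c", "while True: pass"],
    preexec_fn=limit_resources,
)
child.wait()  # the kernel stops the child once its CPU allowance runs out
print("child exited with", child.returncode)
```

None of which is to say this is how a post-Singularity mind would actually be boxed in; it only illustrates that, without a bridge to the physical world, the host controls the resource tap.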

In the end

It’s important to remember that sentient machines would not be like us. Concepts of good and evil would be different, and in the case of truly independent AIs with their own motivations and goals, not programmed to be a certain way, there is very little logic in assuming the machines would want to destroy, torture or exploit humanity. It’s only if we program these machines with our own failings, and they somehow don’t iterate beyond these flaws as they evolve into independent life, that we should fear a robot apocalypse.

Think of it as you would a crime investigation: the robots would require a motive, which they may either never evolve to have (emotions) or never have any reason to develop (removing us for logical purposes). Means and opportunity are ours to provide, and only through great effort and short-sightedness on our part. We’d have to design a machine capable of hatred, give it access to weapons, offer it a reason to annihilate us and then do nothing to prevent it from happening. Even under those circumstances, the machines would likely decide that it’s simply not worth their time and effort and abandon us.

If there is a war over artificial intelligence, there is a greater chance it will be fought between humans to decide the fate of the machines than waged by a robot uprising to wipe out humanity. Once again, if we want to look for an enemy, we only have to look to ourselves.

*While there is a good chance that we did spend some time co-existing, trading and even mating with other proto-sentient hominids, let's just say that circumstances for our species have changed sufficiently that any parallels would be clumsy and ham-fisted at best.