Ten years ago, I found myself in a rare and privileged position: a graduate with a job and a place of my own. I was young, I had money for the first time, and I spent it with all the thrift of Tom Hanks in Big, filling my apartment with pizza boxes and remote control cars and – what I can clearly see now as a cry for help – a chimpanzee called ‘Simon’.

Most stories involving loners living with chimpanzees end with their faces being ripped off. This one ends with me, lifting up a Hawaiian shirt and unscrewing a battery compartment. Simon was a robot chimpanzee with sensors that responded to light and sound and pleaded for its plastic banana three times a day. “Simon wants banana,” it said – and drove me mad. (Or ‘bananas’, rather.)

So it’s no wonder, then, that technical advancements in artificial intelligence fill me with all the dread of five incarnations of The Terminator’s John Connor.

This week, I read an article on Wait But Why that turned that dread into something deeper – a realisation that artificial intelligence could spell the end of mankind.

It posits that there are three categories of artificial intelligence:

  1. Artificial Narrow Intelligence (ANI) – the kind that specialises in one area, like playing chess. It’s already out there – and able to beat the world champion at its one game – but ask it to do anything else and “it’ll look at you blankly”.
  2. Artificial General Intelligence (AGI) – a computer that is as smart as a human across the board. This is much harder to build than ANI – and we haven’t been able to do it yet. It involves, in the words of Professor Linda Gottfredson (quoted in the article), “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience”.
  3. Artificial Superintelligence (ASI) – AI with an intellect far superior to ours in practically every field, from scientific creativity to general wisdom and social skills. It could range from being a little bit smarter than us to being trillions of times smarter. And it’s the reason Wait But Why’s article is full of words like “immortality” and “extinction”.

It’s incredibly difficult to get from ANI to AGI, but the first step is to increase computational power. According to Moore’s Law – the historically reliable rule that the world’s maximum computing power doubles approximately every two years – we’re not far from building an affordable computer that rivals the computational power of the human brain. We’re just 10 years off, actually.
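
If you like your doom quantified, here’s a rough back-of-the-envelope sketch of what that doubling means, in Python. The figures for the brain’s capacity and today’s affordable hardware (and the helper `years_until`) are purely illustrative assumptions of mine, not numbers from the article.

```python
# Back-of-the-envelope sketch of Moore's Law-style doubling.
# Both figures below are illustrative assumptions, not claims from the article.

def years_until(target_cps, current_cps, doubling_period_years=2.0):
    """Years until current_cps catches up with target_cps,
    assuming capacity doubles every doubling_period_years."""
    years = 0.0
    while current_cps < target_cps:
        current_cps *= 2
        years += doubling_period_years
    return years

brain_cps = 10**16       # assumed ballpark for the human brain, in calculations/sec
affordable_cps = 10**14  # assumed figure for an affordable computer today

print(years_until(brain_cps, affordable_cps))  # 14.0 with these made-up numbers
```

Swap in your own estimates and the doubling does the rest – which is exactly the point.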

But that’s just hardware. In order to make it as smart as us, we need to somehow reverse engineer the brain, build software that imitates evolution or – and this is the scary part – give the job to the robots. In other words, build a computer whose job it is to do the research and code itself to human-level intelligence. But what if it keeps going?
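
To give a feel for why “what if it keeps going?” is the unsettling part, here’s a deliberately toy sketch of my own (not anything from the article): a system whose rate of improvement grows with how capable it already is doesn’t politely stop at human level. The function `cycles_to_reach`, the 10% rate and the “human level” yardstick are all made-up for illustration.

```python
# Toy sketch of recursive self-improvement (my own illustration, not the article's).
# Each cycle the system improves itself by 10%, and the improved system then
# makes the next improvement - so the growth compounds.

def cycles_to_reach(target, capability=1.0, rate=0.10):
    """Number of improvement cycles until capability reaches target."""
    cycles = 0
    while capability < target:
        capability *= 1 + rate  # the smarter it gets, the bigger each step
        cycles += 1
    return cycles

human_level = 100.0  # arbitrary yardstick
print(cycles_to_reach(human_level))         # 49 cycles to reach "human level"
print(cycles_to_reach(1000 * human_level))  # 121 cycles to shoot 1,000x past it
```

The numbers are arbitrary; the point is that compounding improvement blows straight past any fixed milestone – which is the jump from AGI to ASI the article is worried about.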

You should read the article on Wait But Why. Depending on your half-full, half-empty (half-man, half-robot) attitude to life, it will either scare the megawatts out of you or give you some kind of Terminator Salvation.

Will the rise of the robots grant us immortality? Hit the tripwire to extinction? Or simply leave us the playthings of the androids – rudimentary sensors in Hawaiian shirts, pleading for bananas from our Meccano masters?

Featured image credit: Craig Coady
