Blog

September 11, 2015
Why We’re Not Ready To Let The Robots Take Over

When entrepreneurs like PayPal co-founder Peter Thiel or Tesla Motors’ Elon Musk speak, many others listen because of their proven ability to think outside the box. So when both recently voiced their fears about artificial intelligence, it may have caught many people off guard, particularly since we so often hear the rosy view of the future: all the advantages we’ll enjoy once robots can function independently, without our instruction.

It’s a future with many positives, but Thiel and Musk urge caution – and, in my view, common sense. They have called for a formal ban on self-guided lethal force – in other words, on any device capable of exerting lethal force that operates fully autonomously.

I’m sure people will split hairs over this definition of “autonomous” (“Well, what if I give the device its instructions 30 seconds before it does something? Does that make it truly autonomous?”). Still, the idea is compelling because, at present, we have no way to prove that autonomous lethal-force devices will do no harm. In Thiel and Musk’s view, until we can prove they’re harmless, we should ban them for the time being and insist on direct human involvement in any and all devices of this type.

It’s a new frontier, for sure. While not exactly the same, there was a story recently about a person being injured when a self-parking Volvo ran into them. Was it the car’s fault? Not according to Volvo, because the car’s owner hadn’t bought the sensor package that would have let the car recognize that a person was in its way.

We as humans still need to be involved in the process to help computers and robots make sense of the world. That said, even with well over a million robots operating in various settings, we hear very few stories of a robot injuring a worker.

To Thiel and Musk’s point, however, it would be foolish to assume fully autonomous robots will have no issues. So their suggested ban on such artificial intelligence makes a lot of sense at this point, especially as the very definition of “robot” is shifting.

For example, we’ve always thought of robots as a physical presence. But companies like Google and Apple have designs on software-based robots that work for us and give us information before we even ask for it. Think of it as a personal assistant that reminds you of things in your day or offers suggestions (“I see that you’re going to the airport today. Traffic on the route to the airport is heavy, so you should leave early. I also have real-time information on your flight, which will be delayed by 30 minutes.”).

In the future, “robot” could describe any kind of autonomous service that does something for you – one that takes its original intent from a human but continually adjusts for your benefit based on what it learns.

Ultimately, my suspicion is that we need as close to a perfect loop as possible, whereby any exertion of lethal force can be directly attributed to an individual’s input. We’re simply not ready for freewheeling non-human intelligence, particularly the kind with lethal-force capabilities.

This isn’t the stuff of some far-off science fiction. As robots become ever more present in our lives, and in ever more forms, it’s probably very smart to ensure that we humans still have the final say.

Where do you fall on the spectrum of this debate?