Friday, April 13, 2012

Silence before the Storm


I'm scared. As a civilisation, we are moving very quickly towards our total destruction.

No, I am not talking about global warming. We can deal with that. Even if we do nothing to control the amount of greenhouse gases in the atmosphere, and it becomes unbearably hot in some places, and sea levels rise 80 meters, making countries like the Netherlands and Bangladesh disappear under water, we will survive. We will relocate to higher, cooler places. We will ban combustion engines, or more likely we will pull CO2 out of the air and store it underground. Climate change will be disruptive and unpleasant, but we will adjust. Countries will help each other, or fight each other, or mayhem will reign, but our civilisation will survive. We may even be better off: we may develop technologies that let us control the climate globally, whatever the cause, and we may embark on ambitious projects like making deserts habitable by changing their local climate. All good stuff.

I am talking about something much more sinister: robots and AI. We blindly keep developing technologies that are a mortal danger to us. Our latest "achievement" is Boston Dynamics' newest robot, PETMAN, or as it should be called, the Terminator, because it is eerily similar to the original killer robot.

PETMAN's development started at least a year ago under the guise of testing soldier uniforms. It is, however, the first serious example of a dangerous new weapon category: the humanoid robot. These robots will be able to use anything designed to be used by humans, anything with a human interface. They will climb stairs, open doors, move things, flip switches, turn valves, operate heavy machinery, drive tanks, fly planes. They won't need machine guns for arms: they will be able to pick up any gun and use it. This is what makes them a new weapon category; there was no such thing in the past. Until now we were making specialised military robots: flying drones; scouting, guarding, and combat vehicles. Now we have created a robot that can control them.

Can we control it? Can we control the person who will send millions of these robots on a mission? As a civilisation we have a poor track record of stopping those in power from committing crimes against humanity. We actually have laws in place that give those in power immunity and the right to secrecy.

Combine this with the latest advances in sensory input processing and AI, exemplified by Google's self-driving cars, IBM's Watson supercomputer, and Apple's Siri, and we are making a weapon far more dangerous than nuclear, chemical, or biological weapons. Those weapons were dumb. We would use them once or twice, see how terrible they were, and put them away. It may not be possible to put away a million highly intelligent PETMANs that can build, repair, and recharge themselves, know exactly where they are, and can access any map and all of human knowledge in milliseconds. Once let out, they may not come back.

Behind many of these technologies is DARPA, and its newest robotics challenge is probably one step too far. We could ban robotics and AI development at the international level before the singularity happens, but will we do it?

Post scriptum: PETMAN climbing stairs...

1 comment:

  1. Hi Tomasz,
    Nice blog, I have to say, but it's 20 years at best, and more likely 50, before any robot apocalypse may happen. The current state of AI lacks one crucial component: vision. There are no scene-understanding algorithms, based on visual input, i.e. images and video, that could remotely approach human capabilities. And... currently scientists have no good idea how to solve the problem. I know this first-hand, as it was my research topic for a while. I mean, they try, but it won't happen soon.
