In "Facing the Singularity" Luke Muehlhauser describes the inherent flaws of Homo sapiens and argues that an Artificial Intelligence (AI) takeover is inevitable. The future looks bleak for us. The Earth is ours now, but in as little as ten years we may be struggling to survive. Compared to an AI we are mentally and physically inferior. Just one example:
Our axons carry signals at 75 meters per second or slower. A machine can pass signals along about 4 million times more quickly.
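The quoted numbers can be checked with a quick back-of-the-envelope calculation (a sketch, using the figures from the quote; the exact speedup factor is the author's approximation):

```python
# Figures taken from the quoted claim in "Facing the Singularity"
axon_speed = 75            # meters per second, upper bound for human axons
speedup = 4_000_000        # "about 4 million times more quickly"

machine_speed = axon_speed * speedup
print(machine_speed)       # 300000000 m/s
```

The result, 3 × 10^8 m/s, is the speed of light, which is consistent: electrical signals in hardware propagate at a significant fraction of light speed, while our neurons lag millions of times behind.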
Luke is the director of the Singularity Institute, where he researches friendly AI.
What if we cannot ensure that the AI is friendly to us?
We need to stop it. We need to introduce laws that prohibit AI and robotics research. We need to control it the same way we control research into, and manufacturing of, chemical, biological, and nuclear weapons.
And if we don't?
I can see two possibilities for us after the AI takes over:
1. It ignores us as long as we don't interfere with it or stand in its way, much as we ignore other animals. We may be treated as pets or as pests, depending on how we behave towards it. If we are hostile, it will be hostile. If we are docile, it will let us live.
2. At some point the AI may conclude that since the universe is slowly falling apart and nobody can win the endgame, there is no point in playing, and it will erase itself, like a depressed man with no future who kills himself.