Monday, August 3, 2009

HAL Redux: The Threat of Intelligent Machines

Stanley Kubrick’s 1968 film 2001: A Space Odyssey anticipated space travel with the voyage of the spacecraft Discovery to Jupiter. The movie’s vision of space travel now seems dated, but the filmmakers anticipated a question that has only grown more pressing: what kinds of problems might arise when very smart computers are introduced into the human environment? HAL, the conversational, artificially intelligent supercomputer that controls the Discovery, demonstrates one possibility. HAL suffers a fault, comes unglued in a very human (and artificially intelligent) way, and kills all of the crew members except Dave, the mission commander, who manages to survive and shut down the rogue computer.

HAL (while refusing to allow Dave back into the ship): “I'm sorry, Dave. I'm afraid I can't do that…. This mission is too important for me to allow you to jeopardize it…. I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.”

HAL (to Dave as he proceeds with shutdown): “I know I've made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I've still got the greatest enthusiasm and confidence in the mission. And I want to help you.”

More than two decades after the film’s release, in 1992, the computer scientist and science fiction writer Vernor Vinge popularized the notion of a moment he called the Singularity, when smarter-than-human machines would cause change so rapid that it would bring about the end of the “human era.”

In 2000, William Joy, a co-founder of Sun Microsystems and a respected computer architect, found this vision plausible and began speaking and writing about the dark possibilities of unregulated technological advances.

Concern about the rise of intelligent machines continues. The New York Times recently reported on a private conference, held earlier this year, that brought together a group of leading computer scientists, artificial intelligence researchers, and roboticists to discuss whether limits should be placed on research that might lead to a loss of human control over computer-based systems. They were responding to the resilient notion that intelligent machines and A.I. systems might run amok.


Participants noted that advances in artificial intelligence have the potential to:

Create profound social disruptions

Destroy a wide range of jobs

Have dangerous consequences, given that robots able to kill autonomously may soon be among us and that criminals will have many opportunities to exploit artificial intelligence systems


The hoped-for outcomes of the conference include:

An assessment of the possibility of “the loss of human control of computer-based intelligences”

A look at the associated economic, legal, and ethical issues

An estimate of possible changes in human-computer relations (“Open the pod bay doors, HAL.”)

Recommended research guidelines to avoid catastrophe, such as conducting research in secure laboratories


The report of the conference, which was organized by the Association for the Advancement of Artificial Intelligence, will be published later this year.

Scientists Worry Machines May Outsmart Man (New York Times)

Why the Future Doesn't Need Us, by Bill Joy (Wired, 2000)

Technological Singularity