Elon Musk Warns of the Existential Threat of Artificial Intelligence

Elon Musk

South African-born Elon Musk, the mastermind behind SpaceX and Tesla, believes that artificial intelligence is “potentially more dangerous than nukes,” and is imploring humankind “to be super careful with AI,” lest the ultimate fate of humanity come to resemble Judgment Day from the 1984 film The Terminator.

Musk made his comments on Twitter after reading Superintelligence by Nick Bostrom. The book deals with the eventual creation of a machine intelligence that can rival the human brain (an artificial general intelligence, or AGI), and with our fate thereafter. While most experts agree that human-level AGI is by now all but inevitable (it’s just a matter of when), Bostrom contends that humanity still has a big advantage up its sleeve: we get to make the first move.

This is what Musk is referring to when he says we need to be careful with AI: We’re rapidly moving towards a Terminator-like scenario, but the actual implementation of these human-level AIs is down to us. We are the ones who will program how the AI actually works. We are the ones who can imbue the AI with a sense of ethics and morality. We are the ones who can implement safeguards, such as Asimov’s three laws of robotics, to prevent an eventual robocalypse.

The Three Laws, quoted as being from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
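
Read programmatically, the Three Laws amount to a priority-ordered rule check: a candidate action is tested against each law in turn, and a higher law always overrides a lower one. The toy Python below is only a sketch of that ordering under heavy simplification; the class, function and field names are hypothetical, and the “through inaction” clause of the First Law is omitted for brevity. It is not a claim about how real AI safeguards are or should be built.

```python
from dataclasses import dataclass


# Purely illustrative: the Three Laws modelled as a priority-ordered rule check.
# All names here (Action, permitted, the boolean fields) are hypothetical and
# invented for this sketch; the "through inaction" clause of the First Law is
# deliberately left out to keep the example simple.
@dataclass
class Action:
    description: str
    harms_human: bool = False       # would carrying this out injure a human being?
    ordered_by_human: bool = False  # was the action ordered by a human?
    endangers_robot: bool = False   # does it put the robot's own existence at risk?


def permitted(action: Action) -> bool:
    """Apply the Three Laws in order; a higher law always overrides a lower one."""
    # First Law: a robot may not injure a human being.
    if action.harms_human:
        return False
    # Second Law: obey human orders, except where they conflict with the First Law
    # (any such conflict has already been rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, as long as the higher laws allow it.
    return not action.endangers_robot


print(permitted(Action("fetch coffee", ordered_by_human=True)))                         # True
print(permitted(Action("push a bystander", harms_human=True, ordered_by_human=True)))   # False
```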

Musk says: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” In short, if we end up building a race of superintelligent robots, we have no one but ourselves to blame — and Musk, sadly, isn’t too optimistic about humanity putting the right safeguards in place.

Musk has frequently spoken out about the potential dangers of artificial intelligence, declaring it “the most serious threat to the survival of the human race”. During an interview at the MIT AeroAstro Centennial Symposium, he described AI as “[humanity’s] biggest existential threat”, adding, “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” At the same event, he likened the creation of artificial intelligence to “summoning the demon”.

Will humanity’s role simply be that of a precursor to a superhuman-level artificial intelligence? And once the AI is up and running, will human beings be deemed superfluous to the new AI society and quickly erased?

Of all the Silicon Valley behemoths, Musk’s companies and Google are surely two of the most likely to be first to develop human-level machine intelligence.

Staff Reporter
