Elon Musk and Stephen Hawking warn of artificial intelligence arms race

The pair joined prominent researchers in pledging support for principles to protect mankind from machines.

BY ANTHONY CUTHBERTSON, Jan 31, 2017

Stephen Hawking and Elon Musk have joined prominent artificial intelligence researchers in pledging support for principles to protect mankind from machines and a potential AI arms race.

An open letter published by the Future of Life Institute (FLI) on Monday outlined the Asilomar AI Principles—23 guidelines to ensure the development of artificial intelligence that is beneficial to humanity.

For decades, science fiction writer Isaac Asimov’s ‘Three Laws of Robotics’ were a cornerstone of thinking about the ethical development of robots and artificial intelligence. First laid out in his 1942 short story Runaround, Asimov’s three principles state: a robot must not harm a human, through action or inaction; a robot must obey humans; and a robot must protect its own existence. Each rule takes precedence over the rules that follow it, ensuring that a human’s life is protected over the existence of a robot.

[Photo caption: A robot toy at the Bosnian War Childhood museum exhibition in Zenica, Bosnia and Herzegovina, June 21, 2016. AI researchers united to design principles to keep robots working for, rather than against, human interests.]

Robotics and AI ethicists have argued that these rules are a good starting point but are too simplistic for the 21st century. A 2009 paper published in the International Journal of Social Robotics suggested that the growing sophistication of computers and their increasing integration into our lives mean better guidelines are needed.

The Asilomar AI Principles follow previous open letters on AI safety and autonomous weapons and have already been signed by more than 700 artificial intelligence and robotics researchers. The principles call for shared responsibility to ensure shared prosperity and caution against an artificial intelligence arms race.

“I’m not a fan of wars, and I think it could be extremely dangerous,” said Stefano Ermon from the Department of Computer Science at Stanford University, who was among the signatories. “Obviously I think that the technology has a huge potential and, even just with the capabilities we have today, it’s not hard to imagine how it could be used in very harmful ways.”

Tesla CEO Elon Musk has previously said that Google is the “only one” he is worried about when it comes to the development of advanced artificial intelligence. Nick Bostrom, a philosophy professor at Oxford University and founding director of the Future of Humanity Institute, warned last year that Google is leading the way in the global race to create human-level AI.

Both Musk and Bostrom will therefore be pleased that the founder of DeepMind, Google’s AI flag bearer, was among the names pledging support for the principles. Demis Hassabis is considered one of the leading minds in the field of artificial intelligence, and his company has previously collaborated with Oxford’s Future of Humanity Institute in proposing an off switch for rogue AI.

In a paper titled Safely Interruptible Agents, researchers outlined a “big red button” for preventing advanced machines from ignoring turn-off commands and slipping out of human control. The idea is echoed in the Asilomar AI Principles, which call for AI systems to be “subject to strict safety and control measures.”

The principles provide far more detailed guidelines than Asimov’s three rules, but the FLI recognizes they are “by no means comprehensive.”

“It’s certainly open to differing interpretations, but it also highlights how the current ‘default’ behavior around many relevant issues could violate principles that most participants agreed are important to uphold,” a spokesperson for the FLI said in a statement emailed to Newsweek.

“We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years.”
