Debating the Potential Dangers of Artificial Intelligence  

Self-driving cars. Siri and Alexa. Spam filters. As artificial intelligence increasingly permeates daily activities, so do questions about the potential hazards and ethical implications of its rapid advancement. Such questions were the focus of a “think tank” discussion Thursday night on the possible risks posed by artificial intelligence.

It’s a timely issue. In October, Saudi Arabia granted citizenship to a robot, giving it more rights than women in the country. Technology research firm Gartner recently estimated that artificial intelligence automation will eliminate 1.8 million jobs by 2020 but create 2.3 million new ones. And a former Google and Uber executive has founded Way of the Future, a religion worshipping artificial intelligence to aid what its website describes as a “peaceful and respectful transition of who is in charge of the planet from people to ‘people + machines.’ ”

Dr. Seth Baum (The Ink/Taryana Odayar)

Thursday’s event featured Dr. Seth Baum, executive director of the Global Catastrophic Risk Institute, a think tank that conducts research on the risk of large-scale events that pose threats to human civilization, such as nuclear war, global warming and emerging technologies like artificial intelligence.

In an interview with TheInk.nyc, Baum outlined scenarios in which AI could bring about global catastrophe. He said that AI could become smarter than humans and organize a takeover of the planet, for instance, or that AI embedded in weaponry or infrastructure systems like electricity, energy and transportation could malfunction because of poor design or hacking. When such disasters might occur is uncertain. “Timing estimates are anywhere from the early 2020s to never,” Baum said.  

Disasters were a central topic of the think tank, an intimate event with around 10 attendees, sponsored by Tech 2025, an organization that examines the potential impact of emerging technologies. The group views 2025 as the year when many of the world’s most disruptive technologies, like artificial intelligence, will have matured.

“It’s a topic that everyone should be interested in,” said Francine Mends, 36, a radiologist attending the think tank, who will soon be starting work with an artificial intelligence company. “Just as humans on this planet, we’re all invested in this issue of AI and other types of advancement in technology.”

Attendees discussed issues and concerns related to artificial intelligence at the “think tank” event. (The Ink/Taryana Odayar)

The think tank focused on the 23 Asilomar Principles of Artificial Intelligence, which were developed at the Beneficial AI 2017 conference in Asilomar, California, earlier this year. That conference drew more than 100 AI researchers and experts in economics, law, philosophy and ethics, who agreed on guidelines on the ethics, safety, transparency and values that should be upheld by those researching AI. More than 2,500 individuals have signed these principles, including two of the most prominent voices warning about AI’s risks, Stephen Hawking and Elon Musk.

Baum, known as the “global apocalypse expert,” kicked off the evening with a presentation on “How To (and How Not to) Avoid AI Catastrophe.” He went over each of the 23 principles, explaining their merits and how they could be improved.

Following Baum’s presentation, attendees split into small groups to discuss which principles they found most problematic and how they might be improved. Several felt that the principles are too vague and open to multiple interpretations, don’t take into account different value systems and don’t have a “kill switch” for developments that are taking off too fast. Some felt that it should be the duty of researchers to guard against the development of racist or discriminatory artificial intelligence.

Overall, attendees felt that the set of principles had room for improvement. “To me it’s a real concern,” said Michael Klein, 23, a backend engineer who writes software and is developing his own programming language called Flock. “It shouldn’t be too ambiguous, it shouldn’t be too open-ended, but we should have the kill switch.”

A main issue raised was the extent to which humans should have control over AI. Some participants argued that too much control might slow down progress. Others argued that a lack of control could result in irreversible harm. 

“I think it’s very important to have this kind of conversation about the technology that can change everything,” said Doron Etzioni, 55, who is heading an education startup. “It’s a prompt to make people think about it, to open their minds to how exactly it should work and what exactly the limitations are.”

Tech 2025 founder Charlie Oliver (The Ink/Taryana Odayar)

Tech 2025’s founder Charlie Oliver said she will pass on feedback from attendees to the Future of Life Institute, a volunteer-run research organization that aims to alleviate existential risks facing humanity, especially risks from advanced AI.   

“Whether it be blockchain or AI, machine-learning or autonomous vehicles, the question always comes: when is it going to be at its peak, when is it going to be ready, when is it going to hit the mainstream,” Oliver said at the event. She believes that artificial intelligence will peak within the next ten years, or around 2025. “That’s going to be a crucial time. And that’s where Tech 2025 is from.”