May 30, 2023, updated 31 May 2023 12:09am

AI is an ‘extinction risk’ for humanity, say tech industry leaders

Mitigating the risks posed by the misuse of AI should be a top priority for policymakers, signatories of the letter say.

By Ryan Morrison

Some of the world’s leading artificial intelligence experts have warned that the technology poses an “extinction risk” to humanity on a par with climate change and nuclear war. OpenAI’s Sam Altman was among the signatories of the open letter published by the non-profit Center for AI Safety (CAIS). In it, they urge lawmakers to make mitigating the risks from AI a global priority.

The Center for AI Safety warns lawmakers and companies to take the risk from AI seriously (Photo: DedMityay/Shutterstock)

More than 350 leading experts and industry insiders have signed the open letter, including two of the three so-called “godfathers of AI”: Geoffrey Hinton and Yoshua Bengio, who, along with Meta’s Yann LeCun, received the 2018 Turing Award for their work on deep learning.

Hinton left his job at Google last month to freely voice his concerns over AI development. He is particularly worried about the direction of the open source movement. During a lecture at the Cambridge University Centre for the Study of Existential Risk last week, he said: “I wish I had an easy answer [for how to handle AI]. My best bet is that the companies who are developing it should be forced to put a lot of work into checking out the safety of [AI models] as they develop them. We need to get experience of these things, how might they try to escape and how they can be controlled.”

Leaders from Microsoft, OpenAI, Google, Anthropic and universities around the world signed the letter which declares: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

The signatories include philosophers, ethicists, legal scholars and economists, as well as experts working in the AI field. CAIS says establishing the risk of extinction from future advanced AI systems is an important step towards solving the problem.

Future safety risk from advanced AI

It isn’t clear exactly what risks the world will face from future AI, but a recent study from Google DeepMind predicted they could include AI taking control of nuclear weapons, mounting its own cyber attacks or helping humans with malicious intent to carry out attacks. Working with OpenAI and Anthropic, the company has created an early warning system that will determine whether a new AI tool is at risk of enabling those malicious use cases.

DeepMind researchers say responsible AI developers need to look beyond just the current risks and anticipate what risks might appear in the future as the models get better at thinking for themselves. “After continued progress, future general-purpose models may learn a variety of dangerous capabilities by default,” they wrote. 


While the risks remain uncertain, the team says a future AI system that isn’t properly aligned with human interests may be able to conduct offensive cyber operations, skillfully deceive humans in dialogue, manipulate humans into carrying out harmful actions, design or acquire weapons, and fine-tune and operate other high-risk AI systems on cloud computing platforms.

“We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” said Dan Hendrycks, director of the CAIS. “As we grapple with immediate AI risks like malicious use, misinformation, and disempowerment, the AI industry and governments around the world need to also seriously confront the risk that future AIs could pose a threat to human existence.”

Governments are already looking at ways to reduce the risks from AI, both now and in the future, including the risk to jobs and national security. The G7 group of nations is considering the issue as part of the “Hiroshima AI process”, a series of meetings covering intellectual property protection, disinformation and governance.

“Mitigating the risk of extinction from AI will require global action,” Hendrycks warned. “The world has successfully cooperated to mitigate risks related to nuclear war. The same level of effort is needed to address the dangers posed by future AI systems.”

Read more: UK at odds with Elon Musk on AI safety
