
Open Letter

Statement on AI Risk

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

At its release on May 30, 2023, hundreds of artificial intelligence experts and other notable figures signed the statement.

The signatories included over 100 professors, among them Geoffrey Hinton and Yoshua Bengio, the two most-cited computer scientists and Turing Award laureates, as well as the scientific and executive leaders of several major AI companies, and experts in pandemics, climate, nuclear disarmament, philosophy, social sciences, and other fields.

Media coverage has emphasized the signatures of several tech leaders, including Sam Altman, Dario Amodei, Ilya Sutskever, and Bill Gates, as well as more than 600 top executives and thousands of computer scientists and engineers.


On the News

FINANCIAL TIMES
13/04/2023

We must slow down the race to God-like AI

Los Angeles Times
24/09/2024

More than 125 Hollywood personas urge Gov. Newsom to sign AI safety bill

TIME
11/03/2024

Employees at Top AI Labs Fear Safety Is an Afterthought

CNBC
18/05/2024

OpenAI has dissolved its Superalignment team amid the high-profile departures of both team leaders, Ilya Sutskever and Jan Leike

YouGov
10/04/2024

About one in seven Americans is very concerned about AI ending humanity

CNN Business
12/03/2024

AI could pose ‘extinction-level’ threat to humans and US must intervene, report warns

TIME
29/03/2023

The Only Way to Deal With the Threat From AI? Shut It Down

NEW SCIENTIST
04/01/2024

There’s at least a 5% chance of AI causing humans to go extinct, say scientists


