Institutes & Establishments

Join PauseAI

The group of people who are aware of AI risks is still small. You are now one of them. Your actions matter more than you think.

Courses designed by AI safety experts

Artificial intelligence could be one of the most impactful technologies developed this century. However, ensuring these systems are safe is an open problem, which encompasses a wide range of AI alignment, governance and ethics challenges.

Future of Life Institute

Steering transformative technology towards benefiting life and away from extreme large-scale risks.

Center for AI Safety

CAIS exists to equip policymakers, business leaders, and the broader world with the understanding and tools necessary to manage and reduce societal-scale risks from artificial intelligence.

Machine Intelligence Research Institute

Foundational mathematical research to ensure smarter-than-human artificial intelligence has a positive impact.

Beneficial AI Foundation

Researching how to keep AI beneficial for generations to come.

Centre for the Study of Existential Risk

The Centre for the Study of Existential Risk is an interdisciplinary research centre within the Institute for Technology and Humanity at the University of Cambridge dedicated to the study and mitigation of risks that could lead to human extinction or civilisational collapse.

Less Wrong

An online forum and community dedicated to improving human reasoning and decision-making.

AISafety.com

AI Safety must be a global priority. Artificial intelligence may be our last invention. Explore how we could make it an ally rather than an adversary.

Nonlinear

There are tens of thousands of people working full time to make AI powerful, but around 300 working to make AI safe. This needs to change.

AI Safety Institute

A research organisation within the UK Government’s Department for Science, Innovation, and Technology, with the mission to equip governments with an empirical understanding of the safety of advanced AI systems.

Gladstone AI

Gladstone AI’s mission is to promote the responsible development and adoption of AI by providing safeguards against AI-driven national security threats, such as weaponization and loss of control.

Lethal Intelligence Microblog

Blow your mind with the latest stories

Interviews and Talks

Industry Leaders and Notable Public Figures

