Paul Christiano

Head of AI safety at the US AI Safety Institute (US AISI), part of the National Institute of Standards and Technology (NIST), which was established in November 2023.

He formerly led the language model alignment team at OpenAI, where he was one of the principal architects of the most significant alignment breakthrough to date: RLHF (reinforcement learning from human feedback).

He later resigned to found and lead the non-profit Alignment Research Center (ARC), which works on theoretical AI alignment and on evaluations of machine learning models.

In 2023, Christiano was named one of the TIME 100 Most Influential People in AI (TIME100 AI).

In September 2023, Christiano was appointed to the UK government’s Frontier AI Taskforce advisory board. He is also an initial trustee of Anthropic’s Long-Term Benefit Trust.

HEAD OF US AI SAFETY SAYS 50-50 CHANCE AI KILLS US ALL

Paul Christiano's Views on AI Doom

We shouldn't build conscious AIs – Paul Christiano

Can we Prevent the AIs from Killing?
