Paul Christiano – Preventing an AI Takeover

Discussion with one of the world’s leading AI safety researchers:

  • Does he regret inventing RLHF?
  • What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?
  • Why he has relatively modest timelines (40% by 2040, 15% by 2030)
  • Why he’s leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon
  • His current research into a new proof system, and how this could solve alignment by explaining a model’s behavior
  • and much more.
