CEO of OpenAI since 2019. As the leader of one of the three biggest players in the AI race, his decision-making could arguably shape the future of all of humanity directly.
If OpenAI succeeds in summoning the first AGI into the world, he could become one of the most important people who have ever lived.
He has repeatedly expressed concern about existential risk, and yet he has acted in a way completely inconsistent with those statements.
Frequently accused of being disingenuous, Machiavellian, and manipulative, he recently survived one of his most dramatic moments: a failed attempt by OpenAI's board of directors to oust him (precisely because they concluded they could not trust him), a decision he managed to overturn and ultimately used to install a new board of loyalists, consolidating more power and making OpenAI more “Closed-AI” than ever.
Sam Altman: We’ll pause AI once it’s improving in ways we don’t fully understand. Also Sam Altman: It’s improving in ways we don’t fully understand.
AI could pose a "risk of extinction" to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a "global priority."
"Is this a tool we have built or a creature we have built?" - Sam Altman said a day before he got fired by the board of OpenAI
AI could escape the lab - sci-fi scenarios are possible. We need to stare AI extinction risk in the face. People who disagree are wrong.
AGI Safety is different: the stakes are so high, and irreversible situations are so easy to imagine.