“Is this a tool we have built or a creature we have built?” – Sam Altman, one day before he was fired by the OpenAI board

Sam Altman

CEO of OpenAI since 2019. As the leader of one of the three biggest players in the AI race, his decision-making could arguably have a direct effect on the future of all humanity.
If OpenAI succeeds in bringing the first AGI into the world, he could become one of the most important people ever to have lived.
He has repeatedly expressed concerns about existential risk, yet he has acted in a way completely inconsistent with those concerns.
Frequently accused of being disingenuous, Machiavellian, and manipulative, he recently survived a dramatic attempt by OpenAI’s board of directors to oust him (precisely because they concluded they could not trust him), a decision he managed to overturn and ultimately used to install a new board of loyalists, consolidating more power and making OpenAI more “Closed-AI” than ever.

AI will probably lead to the end of the world, but in the meantime, there will be great companies

U.S. Senate hearing, May 2023

The bad case is lights out for all of us

If this technology goes wrong, it can go quite wrong

People should be happy that we are a little bit scared of this

If you really believe that AI poses a danger to humanity, why keep developing it?

If AGI goes wrong, oh boy, hiding in a bunker won’t save anyone 😂

Sam Altman: We’ll pause AI once it’s improving in ways we don’t fully understand. Also Sam Altman: It’s improving in ways we don’t fully understand.

I don’t care if we burn $50 billion a year, we’re building AGI and it’s going to be worth it

There is a possibility of AI causing the extinction of the human race

AI could pose a “risk of extinction” to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a “global priority”

AI could escape the lab – sci-fi scenarios are possible. We need to stare AI extinction risk in the face. People who disagree are wrong.

Altman calls for regulation of “existential risk level systems”

AGI Safety is different, the stakes are so high and the irreversible situations so easy to imagine

It’s easier to get good behavior out of people when they are staring existential risk in the face
