Rocket Alignment Analogy

To expect AI alignment by default is like expecting a rocket randomly fired into the sky to land exactly where we want.

Liron Shapira

Liron Shapira is an entrepreneur and angel investor who has served as CEO and CTO of various software startups. A Silicon Valley success story and a father of three, he has somehow managed, in parallel, to be a “consistently candid AI doom pointer-outer” (in his own words) and, in fact, one of the most influential voices in the AI safety discourse.

A “contrarian” by nature, he makes arguments that are sharp, to the point, and ultra-rational, leaving you convinced that the only realistic exit off the Doom Train, for now, is the “final stop”: pausing the training of the next “frontier” models.

He often says that the ideas he presents are not his own, joking that he is a “stochastic parrot” of other thinking giants in the field, but that is him being too humble: he has in fact contributed multiple examples of original thought (e.g. the Goal-Completeness analogy to Turing-Completeness for AGIs, the three major evolutionary discontinuities on Earth, and more).

With his constant efforts to raise awareness among the general public, using his unique no-nonsense style of explaining advanced ideas in layman’s terms, he has done more for the future trajectory of events than he will ever know…

In June 2024 he launched an awesome, addictive podcast, playfully named “Doom Debates”, which keeps getting better and better, so stay tuned.

The open problem of AI Corrigibility explained by Liron Shapira

Complexity is in the eye of the beholder – by Liron Shapira

AI perceives humans as plants

