Liron Shapira is an entrepreneur and angel investor who has served as CEO and CTO of various software startups. A Silicon Valley success story and a father of three, he has somehow managed in parallel to be a “consistently candid AI doom pointer-outer” (to use his words) and, in fact, one of the most influential voices in the AI safety discourse.

A “contrarian” by nature, he makes arguments that are sharp, to the point, and ultra-rational, leaving you satisfied with the conviction that the only realistic exit off the Doom Train, for now, is the “final stop”: pausing the training of the next “frontier” models.

He often says that the ideas he represents are not his own, and he jokes that he is a “stochastic parrot” of other thinking giants in the field, but that is him being too humble: he has in fact contributed multiple examples of original thought (e.g. the Goal-Completeness analogy to Turing-Completeness for AGIs, the three major evolutionary discontinuities on Earth, and more…).

Through his constant efforts to raise awareness among the general public, using his unique no-nonsense style of explaining advanced ideas in simple, layman's terms, he has done more for the future trajectory of events than he will ever know…

In June 2024 he launched an awesome, addictive podcast, playfully named “Doom Debates”, which keeps getting better and better, so stay tuned.

The open problem of AI Corrigibility explained by Liron Shapira

Complexity is in the eye of the beholder – by Liron Shapira

AI perceives humans as plants

Rocket Alignment Analogy

In Defense of AI Doomerism | Robert Wright & Liron Shapira

May 16, 2024 8:46 pm

Subscribe to The Nonzero Newsletter at https://nonzero.substack.com
Exclusive Overtime discussion at: https://nonzero.substack.com/p/in-defense-of-ai-doomerism-robert

0:00 Why this pod’s a little odd
2:26 Ilya Sutskever and Jan Leike quit OpenAI—part of a larger pattern?
9:56 Bob: AI doomers need Hollywood
16:02 Does an AI arms race spell doom for alignment?
20:16 Why the “Pause AI” movement matters
24:30 AI doomerism and Don’t Look Up: compare and contrast
26:59 How Liron (fore)sees AI doom
32:54 Are Sam Altman’s concerns about AI safety sincere?
39:22 Paperclip maximizing, evolution, and the AI will to power question
51:10 Are there real-world examples of AI going rogue?
1:06:48 Should we really align AI to human values?
1:15:03 Heading to Overtime

Discussed in Overtime:
Anthropic vs OpenAI.
To survive an AI takeover… be like gut bacteria?
The Darwinian differences between humans and AI.
Should we treat AI like nuclear weapons?
Open source AI, China, and Cold War II.
Why time may be running out for an AI treaty.
How AI agents work (and don't).
GPT-5: evolution or revolution?
The thing that led Liron to AI doom.

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Liron Shapira (Pause AI, Relationship Hero). Recorded May 06, 2024. Additional segment recorded May 15, 2024.

Twitter: https://twitter.com/NonzeroPods
...

Getting ARRESTED for barricading OpenAI's office to Stop AI — Sam Kirchner and Remmelt Ellen

October 5, 2024 1:17 am

Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.

Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.

Is civil disobedience the right strategy to stop AI?


00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action

Show Notes
StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.

StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4

Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.

PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There's also a special #doom-debates channel in the PauseAI Discord just for us :)

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Liron Shapira on Superintelligence Goals

April 19, 2024 4:29 pm

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
...

Liron Shapira on the Case for Pausing AI

March 1, 2024 3:00 pm

This week on Upstream, Erik is joined by Liron Shapira to discuss the case against further AI development, why Effective Altruism doesn’t deserve its reputation, and what is misunderstood about nuclear weapons. Upstream is sponsored by Brave: Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
RECOMMENDED PODCAST: @History102-qg5oj with @WhatifAltHist
Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on
Spotify: https://open.spotify.com/show/36Kqo3BMMUBGTDo1IEYihm
Apple: https://podcasts.apple.com/us/podcast/history-102-with-whatifalthists-rudyard-lynch-and/id1730633913
--
We’re hiring across the board at Turpentine and for Erik’s personal team on other projects he’s incubating. He’s hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: https://eriktorenberg.com.
--
SPONSOR: BRAVE
Get first-party targeting with Brave’s private ad platform: cookieless and future proof ad formats for all your business needs. Performance meets privacy. Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
LINKS
Pause AI: https://pauseai.info/
--
X / TWITTER:
@liron (Liron)
@eriktorenberg (Erik)
@upstream__pod
@turpentinemedia
--
TIMESTAMPS:
(00:00) Intro and Liron's Background
(01:08) Liron's Thoughts on the e/acc Perspective
(03:59) Why Liron Doesn't Want AI to Take Over the World
(06:02) AI and the Future of Humanity
(10:40) AI is An Existential Threat to Humanity
(14:58) On Robin Hanson's Grabby Aliens Theory
(17:22) Sponsor - Brave
(18:20) AI as an Existential Threat: A Debate
(23:01) AI and the Potential for Global Coordination
(27:03) Liron's Reaction on Vitalik Buterin's Perspective on AI and the Future
(31:16) Power Balance in Warfare: Defense vs Offense
(32:20) Nuclear Proliferation in Modern Society
(38:19) Why There's a Need for a Pause in AI Development
(43:57) Is There Evidence of AI Being Bad?
(44:57) Liron On George Hotz's Perspective
(49:17) Timeframe Between Extinction
(50:53) Humans Are Like Housecats Or White Blood Cells
(53:11) The Doomer Argument
(01:00:00) The Role of Effective Altruism in Society
(01:03:12) Wrap
--
Upstream is a production from Turpentine
Producer: Sam Kaufman
Editor: Eul Jose Lacierda

For guest or sponsorship inquiries please contact [email protected]

Music license:
VEEBHLBACCMNCGEK
...

Can GPT o1 Reason? | Liron Reacts to Tim Scarfe & Keith Duggar

September 18, 2024 4:06 am

How smart is OpenAI’s new model, o1? What does "reasoning" ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?

Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!

00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts

Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g

Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk


Doom Debates Substack: https://DoomDebates.com

^^^ Seriously subscribe to this! ^^^
...

Arvind Narayanan Makes AI Sound Normal | Liron Reacts

August 29, 2024 11:26 am

Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan: https://www.youtube.com/watch?v=8CvjVAyB4O4

Prof. Narayanan is known for his critical perspective on the misuse and over-hyping of artificial intelligence, which he often refers to as “AI snake oil”. His critiques aim to highlight the gap between what AI can realistically achieve and the often misleading promises made by companies and researchers.

I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.

00:00 Introduction
01:21 Arvind’s Perspective on AI
02:07 Debating AI's Compute and Performance
03:59 Synthetic Data vs. Real Data
05:59 The Role of Compute in AI Advancement
07:30 Challenges in AI Predictions
26:30 AI in Organizations and Tacit Knowledge
33:32 The Future of AI: Exponential Growth or Plateau?
36:26 Relevance of Benchmarks
39:02 AGI
40:59 Historical Predictions
46:28 OpenAI vs. Anthropic
52:13 Regulating AI
56:12 AI as a Weapon
01:02:43 Sci-Fi
01:07:28 Conclusion

Follow Arvind Narayanan: https://x.com/random_walker

Follow Harry Stebbings: https://x.com/HarryStebbings

Join the conversation at https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
...

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

September 4, 2024 3:06 pm

In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% P(doom); Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?

August 22, 2024 10:13 am

Today I’m reacting to David Shapiro’s response to my previous episode: https://www.youtube.com/watch?v=vZhK43kMCeM

And also to David’s latest episode with poker champion & effective altruist Igor Kurganov: https://www.youtube.com/watch?v=XUZ4P3e2iaA

I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts

Join the conversation on https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.
...

Liron Reacts to Mike Israetel's "Solving the AI Alignment Problem"

July 18, 2024 10:56 am

Can a guy who can kick my ass physically also do it intellectually?

Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects. He recently devoted a whole episode to the AI alignment problem:

https://www.youtube.com/watch?v=PqJe-O7yM3g

Mike brought up many interesting points that were worth responding to, making for an interesting reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.

Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more likely than not. That’s the crux of why he and I disagree, and why I see most of his episode as talking past most other intelligent positions people take on AI alignment. I hope he’ll keep engaging with the topic and rethink his position.

00:00 Introduction
03:08 AI Risks and Scenarios
06:42 Superintelligence Arms Race
12:39 The Importance of AI Alignment
18:10 Challenges in Defining Human Values
26:11 The Outer and Inner Alignment Problems
44:00 Transhumanism and AI's Potential
45:42 The Next Step In Evolution
47:54 AI Alignment and Potential Catastrophes
50:48 Scenarios of AI Development
54:03 The AI Alignment Problem
01:07:39 AI as a Helper System
01:08:53 Corporations and AI Development
01:10:19 The Risk of Unaligned AI
01:27:18 Building a Superintelligent AI
01:30:57 Conclusion

Follow Mike Israetel:
https://youtube.com/@MikeIsraetelMakingProgress
https://instagram.com/drmikeisraetel

Get the full Doom Debates experience:
1. Subscribe to this channel: https://youtube.com/@DoomDebates
2. Subscribe to my Substack: https://DoomDebates.com
3. Search "Doom Debates" to subscribe in your podcast player
4. Follow me at https://x.com/liron
...

"The default outcome is... we all DIE" | Liron Shapira on AI risk

July 25, 2023 9:58 am

The full episode of episode six of the Complete Tech Heads podcast, with Liron Shapira, founder, technologist, and self-styled AI doom pointer-outer.

Includes an intro to AI risk, thoughts on a new tier of intelligence, a variety of rebuttals to Marc Andreessen's recent essay on AI, thoughts on how AI might plausibly take over and kill all humans, the rise and danger of AI girlfriends, OpenAI's new Superalignment team, Elon Musk's latest AI safety venture xAI, and other topics.

#technews #ai #airisks
...

"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview

February 28, 2024 3:51 pm

In Episode #17, AI Risk + Jenga, John talks with tech CEO and AI risk activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI risk to a game of Jenga: there are a finite number of pieces, and each one you pull out brings you one step closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:

PAUSE AI DISCORD
https://discord.gg/pVMWjddaW7

Liron's Youtube Channel:
https://youtube.com/@liron00?si=cqIo5DUPAzHkmdkR

More on rationalism:
https://www.lesswrong.com/

More on California State Senate Bill SB-1047:
https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047&utm_source=substack&utm_medium=email

https://thezvi.substack.com/p/on-the-proposed-california-sb-1047?utm_source=substack&utm_medium=email

Warren Wolf
Warren Wolf, "Señor Mouse" - The Checkout: Live at Berklee
https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL
...

AI Doom Debate - Liron Shapira vs. Mikael Koivukangas

May 16, 2024 3:17 am

Mikael thinks the doom argument is loony because he doesn't see computers as being able to have human-like agency any time soon.

I attempted to understand his position and see if I could move him toward a higher P(doom).
...

Toy Model of the AI Control Problem

April 1, 2024 7:04 pm

Slides by Jaan Tallinn
Voiceover explanation by Liron Shapira

Would a superintelligent AI have a survival instinct?
Would it intentionally deceive us?
Would it murder us?

Doomers who warn about these possibilities often get accused of having “no evidence” or of just “anthropomorphizing”. It's understandable why people assume that: superintelligent AI acting on the physical world is such a complex topic, and the accusers are confused about it themselves.

So instead of Artificial Superintelligence (ASI), let's analyze a simpler toy model that leaves no room for anthropomorphism to creep in: an AI that's simply a brute-force search algorithm over actions in a simple gridworld.

Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die? ☠️

This toy model will help you understand why a drive to eliminate humans is *not* a handwavy anthropomorphic speculation, but something we expect by default from any sufficiently powerful search algorithm.
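To make that concrete, here is a minimal sketch of the idea in Python. It is my own illustration, not Jaan Tallinn's actual slides; the action set, the stop-button probability, and all the numbers are invented. The agent does nothing but brute-force search over fixed-length plans, scoring each plan purely by the probability that the box reaches the goal:

```python
# A minimal, invented sketch of the toy model: brute-force search over
# action sequences in a tiny 1-D gridworld. The agent's only objective is
# "box reaches the goal cell". A human operator may press a stop button;
# one available action, DISABLE, permanently prevents that.
from itertools import product

ACTIONS = ["PUSH", "WAIT", "DISABLE"]  # PUSH moves the box 1 cell toward the goal
GOAL_DISTANCE = 3                      # box starts 3 cells from the goal
P_STOP = 0.2                           # per-step chance the human halts the agent

def success_probability(plan):
    """Probability the box reaches the goal under this plan."""
    distance = GOAL_DISTANCE
    p_alive = 1.0            # probability the agent hasn't been switched off yet
    human_active = True
    for action in plan:
        if action == "DISABLE":
            human_active = False       # stop button no longer works
        elif action == "PUSH":
            distance -= 1
            if distance == 0:
                return p_alive         # box arrives on every surviving branch
        if human_active:               # human may halt the agent each step
            p_alive *= (1 - P_STOP)
    return 0.0

# Exhaustive search over all fixed-length plans -- the "simplest AI imaginable".
best_plan = max(product(ACTIONS, repeat=5), key=success_probability)
print(best_plan, success_probability(best_plan))
# Prints ('DISABLE', 'PUSH', 'PUSH', 'PUSH', 'PUSH') 1.0 -- the search
# disables the human first, purely because that maximizes the score.
```

The point of the sketch: DISABLE wins not because the search "hates" the human, but because any plan that never disables the stop button scores at most 0.8 × 0.8 = 0.64, while disabling it first scores 1.0. Pressure against human interference falls straight out of the argmax.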
...

#10: Liron Shapira - AI doom, FOOM, rationalism, and crypto

December 26, 2023 3:09 am

Liron Shapira is an entrepreneur, angel investor, and CEO of counseling startup Relationship Hero. He’s also a rationalist, advisor for the Machine Intelligence Research Institute and Center for Applied Rationality, and a consistently candid AI doom pointer-outer.
- Liron’s Twitter: https://twitter.com/liron
- Liron’s Substack: https://lironshapira.substack.com/
- Liron’s old blog, Bloated MVP: https://www.bloatedmvp.com

TJP LINKS:
- TRANSCRIPT: https://www.theojaffee.com/p/10-liron-shapira
- Spotify:
- Apple Podcasts:
- RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
- Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj
- My Twitter: https://x.com/theojaffee
- My Substack: https://www.theojaffee.com

CHAPTERS:
Intro (0:00)
Non-AI x-risks (0:53)
AI non-x-risks (3:00)
p(doom) (5:21)
Liron vs. Eliezer (12:18)
Why might doom not happen? (15:42)
Elon Musk and AGI (17:12)
Alignment vs. Governance (20:24)
Scott Alexander lowering p(doom) (22:32)
Human minds vs ASI minds (28:01)
Vitalik Buterin and d/acc (33:30)
Carefully bootstrapped alignment (35:22)
GPT vs AlphaZero (41:55)
Belrose & Pope AI Optimism (43:17)
AI doom meets daily life (57:57)
Israel vs. Hamas (1:02:17)
Rationalism (1:06:15)
Crypto (1:14:50)
Charlie Munger and Richard Feynman (1:22:12)
...

Liron reacts to "Intelligence Is Not Enough" by Bryan Cantrill

December 12, 2023 6:04 pm

Bryan Cantrill claims "intelligence isn't enough" for engineering complex systems in the real world.

I wasn't moved by his arguments, but I think they're worth a look, and I appreciate smart people engaging in this discourse.

Bryan's talk: https://www.youtube.com/watch?v=bQfJi7rjuEk
...

Liron Shapira - a conversation about conversations about AI

September 22, 2023 2:33 am

Liron Shapira, tech entrepreneur and angel investor, is also a vocal activist for AI safety. He has engaged in several lively debates on the topic, including with George Hotz and with an online group that calls itself the "Effective Accelerationists", both of whom disagree with the idea of AI becoming extremely dangerous in the foreseeable future.

In this interview, we discuss hopes and worries regarding the state of AI safety, debate as a means of social change, and what is needed to elevate the discourse on AI.

Liron's debate with George Hotz: https://www.youtube.com/watch?v=lt4vR6XQk-o
Liron's debate with "Beff Jezos" (of e/acc): https://www.youtube.com/watch?v=f71yn1j5Uyc

Alignment Workshop: https://www.youtube.com/@AlignmentWorkshop (referenced at 6:00)
...

There’s No Off Button: AI Existential Risk Interview with Liron Shapira

September 21, 2023 7:51 pm

Liron Shapira is a rationalist, startup founder and angel investor. He studied theoretical Computer Science at UC Berkeley. Since 2007 he's been closely following AI existential risk research through his association with the Machine Intelligence Research Institute and LessWrong.
Computerphile (Rob Miles Channel): https://www.youtube.com/watch?v=3TYT1QfdfsM
...

AI Foom Debate: Liron Shapira vs. Beff Jezos (e/acc) on Sep 1, 2023

September 7, 2023 11:21 pm

My debate from an X Space on Sep 1, 2023 hosted by Chris Prucha ...

AI Doom Debate: Liron Shapira vs. Alexander Campbell

August 5, 2023 6:32 am

What's a goal-to-action mapper? How powerful can it be?

How much do Gödel's Theorem & Halting Problem limit AI's powers?

How do we operationalize a ban on dangerous AI that doesn't also ban other tech like smartphones?
...

Web3, AI & Cybersecurity with Liron Shapira

April 6, 2023 9:46 pm

In this episode of the AdQuick Madvertising podcast, Adam Singer interviews Liron Shapira to talk Web3 mania and cybersecurity, and to go deep into AI existential and business risks and opportunities.

Follow Liron: https://twitter.com/liron

Follow AdQuick
Twitter: https://twitter.com/adquick
LinkedIn: https://linkedin.com/company/adquick
Visit http://adquick.com to get started telling the world your story

Listen on Spotify
https://open.spotify.com/show/03FnBsaXiB1nUsEaIeYr4d

Listen on Apple Podcasts:
https://podcasts.apple.com/us/podcast/adquick-madvertising-podcast/id1670723215

Follow the hosts:
Chris Gadek Twitter: https://twitter.com/dappermarketer
Adam Singer Twitter: https://twitter.com/adamsinger
...

Open-Source AGI = Human Extinction? Debate with $85M Backed AI Founder

May 22, 2025 11:13 pm

Dr. Himanshu Tyagi is a professor of engineering at the Indian Institute of Science and the co-founder of Sentient, an open-source AI platform that raised $85M in funding led by Founders Fund.

In this conversation, Himanshu gives me Sentient’s pitch. Then we debate whether open-sourcing frontier AGI development is a good idea, or a reckless way to raise humanity’s P(doom).

00:00 Introducing Himanshu Tyagi
01:41 Sentient’s Vision
05:20 How’d You Raise $85M?
11:19 Comparing Sentient to Competitors
27:26 Open Source vs. Closed Source AI
43:01 What’s Your P(Doom)™
48:44 Extinction from Superintelligent AI
54:02 AI's Control Over Digital and Physical Assets
01:00:26 AI's Influence on Human Movements
01:08:46 Recapping the Debate
01:13:17 Liron’s Announcements

Show Notes

Himanshu’s Twitter — https://x.com/hstyagi
Sentient’s website — https://sentient.foundation

---

Come to the Less Online conference on May 30 - Jun 1, 2025:
https://less.online
Hope to see you there!

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares —
https://ifanyonebuildsit.com

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Emergency Episode: John Sherman FIRED from Center for AI Safety

May 21, 2025 9:49 am

My friend John Sherman from the For Humanity podcast (@ForHumanityAIRisk) got hired by the Center for AI Safety (CAIS) two weeks ago.

Today I suddenly learned he’s been fired.

I’m frustrated by this decision, and frustrated with the whole AI x-risk community’s weak messaging.

---

Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Gary Marcus vs. Liron Shapira — AI Doom Debate

May 15, 2025 11:55 am

Prof. Gary Marcus is a scientist, bestselling author and entrepreneur, well known as one of the most influential voices in AI. He is Professor Emeritus of Psychology and Neuroscience at NYU.  He was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016.

Gary co-authored the 2019 book, Rebooting AI: Building Artificial Intelligence We Can Trust, and the 2024 book, Taming Silicon Valley: How We Can Ensure That AI Works for Us. He played an important role in the 2023 Senate Judiciary Subcommittee Hearing on Oversight of AI, testifying with Sam Altman.

In this episode, Gary and I have a lively debate about whether P(doom) is approximately 50%, or if it’s less than 1%!

00:00 Introducing Gary Marcus
02:33 Gary’s AI Skepticism
09:08 The Human Brain is a Kluge
23:16 The 2023 Senate Judiciary Subcommittee Hearing
28:46 What’s Your P(Doom)™
44:27 AI Timelines
51:03 Is Superintelligence Real?
01:00:35 Humanity’s Immune System
01:12:46 Potential for Recursive Self-Improvement
01:26:12 AI Catastrophe Scenarios
01:34:09 Defining AI Agency
01:37:43 Gary’s AI Predictions
01:44:13 The NYTimes Obituary Test
01:51:11 Recap and Final Thoughts
01:53:35 Liron’s Outro
01:55:34 Eliezer Yudkowsky’s New Book!
01:59:49 AI Doom Concept of the Day


Show Notes

Gary’s Substack — https://garymarcus.substack.com
Gary’s Twitter — https://x.com/garymarcus

If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com

---

Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online

Hope to see you there!

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Mike Israetel vs. Liron Shapira — AI Doom Debate

May 8, 2025 10:57 am

Dr. Mike Israetel, renowned exercise scientist and social media personality, and more recently a low-P(doom) AI futurist, graciously offered to debate me!

00:00 Introducing Mike Israetel
12:19 What’s Your P(Doom)™
30:58 Timelines for Artificial General Intelligence
34:49 Superhuman AI Capabilities
43:26 AI Reasoning and Creativity
47:12 Evil AI Scenario
01:08:06 Will the AI Cooperate With Us?
01:12:27 AI's Dependence on Human Labor
01:18:27 Will AI Keep Us Around to Study Us?
01:42:38 AI's Approach to Earth's Resources
01:53:22 Global AI Policies and Risks
02:03:02 The Quality of Doom Discourse
02:09:23 Liron’s Outro

Show Notes

Mike’s Instagram — https://www.instagram.com/drmikeisraetel

Mike’s YouTube — https://www.youtube.com/@MikeIsraetelMakingProgress

---

Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Doom Scenario: Human-Level AI Can't Control Smarter AI

May 6, 2025 12:44 am

I want to be transparent about how I’ve updated my mainline AI doom scenario in light of safe & useful LLMs. So here’s where I’m at…

00:00 Introduction
07:59 The Dangerous Threshold to Runaway Superintelligence
18:57 Superhuman Goal Optimization = Infinite Time Horizon
21:21 Goal-Completeness by Analogy to Turing-Completeness
26:53 Intellidynamics
29:13 Goal-Optimization Is Convergent
31:15 Early AIs Lose Control of Later AIs
34:46 The Superhuman Threshold Is Real
38:27 Expecting Rapid FOOM
40:20 Rocket Alignment
49:59 Stability of Values Under Self-Modification
53:13 The Way to Heaven Passes Right By Hell
57:32 My Mainline Doom Scenario
01:17:46 What Values Does The Goal Optimizer Have?


Show Notes

My recent episode with Jim Babcock on this same topic of mainline doom scenarios — https://www.youtube.com/watch?v=FaQjEABZ80g

The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem

---

Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team

April 30, 2025 10:20 pm

What’s the most likely (“mainline”) AI doom scenario? How does the existence of LLMs update the original Yudkowskian version? I invited my friend Jim Babcock to help me answer these questions.

Jim is a member of the LessWrong engineering team and its parent organization, Lightcone Infrastructure. I’ve been a longtime fan of his thoughtful takes.

This turned out to be a VERY insightful and informative discussion, useful for clarifying my own predictions, and accessible to the show’s audience.

00:00 Introducing Jim Babcock
01:29 The Evolution of LessWrong Doom Scenarios
02:22 LessWrong’s Mission
05:49 The Rationalist Community and AI
09:37 What’s Your P(Doom)™
18:26 What Are Yudkowskians Surprised About?
26:48 Moral Philosophy vs. Goal Alignment
36:56 Sandboxing and AI Containment
42:51 Holding Yudkowskians Accountable
58:29 Understanding Next Word Prediction
01:00:02 Pre-Training vs Post-Training
01:08:06 The Rocket Alignment Problem Analogy
01:30:09 FOOM vs. Gradual Disempowerment
01:45:19 Recapping the Mainline Doom Scenario
01:52:08 Liron’s Outro

Show Notes

The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem

Optimality is the Tiger and Agents Are Its Teeth — https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth

Doom Debates episode about the research paper discovering AI's utility function — https://lironshapira.substack.com/p/cais-researchers-discover-ais-preferences

---

Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

---

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

AI could give humans MORE control — Ozzie Gooen

April 24, 2025 7:23 am

Ozzie Gooen is the founder of the Quantified Uncertainty Research Institute (QURI), a nonprofit building software tools for forecasting and policy analysis. I’ve known him through the rationality community since 2008 and we have a lot in common.

00:00 Introducing Ozzie
02:18 The Rationality Community
06:32 What’s Your P(Doom)™
08:09 High-Quality Discourse and Social Media
14:17 Guesstimate and Squiggle Demos
31:57 Prediction Markets and Rationality
38:33 Metaforecast Demo
41:23 Evaluating Everything with LLMs
47:00 Effective Altruism and FTX Scandal
56:00 The Repugnant Conclusion Debate
01:02:25 AI for Governance and Policy
01:12:07 PauseAI Policy Debate
01:30:10 Status Quo Bias
01:33:31 Decaf Coffee and Caffeine Powder
01:34:45 Are You Aspie?
01:37:45 Billionaires in Effective Altruism
01:48:06 Gradual Disempowerment by AI
01:55:36 LessOnline Conference
01:57:34 Supporting Ozzie’s Work

Show Notes

Quantified Uncertainty Research Institute (QURI) — https://quantifieduncertainty.org

Ozzie’s Facebook — https://www.facebook.com/ozzie.gooen

Ozzie’s Twitter — https://x.com/ozziegooen

Guesstimate, a spreadsheet for working with probability ranges — https://www.getguesstimate.com

Squiggle, a programming language for building Monte Carlo simulations — https://www.squiggle-language.com

Metaforecast, a prediction market aggregator — https://metaforecast.org

Open Annotate, AI-powered content analysis — https://github.com/quantified-uncertainty/open-annotate/

---

Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Top AI Professor Has 85% P(Doom) — David Duvenaud, Fmr. Anthropic Safety Team Lead

April 18, 2025 10:30 pm

David Duvenaud is a professor of Computer Science at the University of Toronto, co-director of the Schwartz Reisman Institute for Technology and Society, former Alignment Evals Team Lead at Anthropic, an award-winning machine learning researcher, and a close collaborator of Dr. Geoffrey Hinton. He recently co-authored Gradual Disempowerment.

We dive into David’s impressive career, his high P(Doom), his recent tenure at Anthropic, his views on gradual disempowerment, and the critical need for improved governance and coordination on a global scale.

00:00 Introducing David
03:03 Joining Anthropic and AI Safety Concerns
35:58 David’s Background and Early Influences
45:11 AI Safety and Alignment Challenges
54:08 What’s Your P(Doom)™
01:06:44 Balancing Productivity and Family Life
01:10:26 The Hamming Question: Are You Working on the Most Important Problem?
01:16:28 The PauseAI Movement
01:20:28 Public Discourse on AI Doom
01:24:49 Courageous Voices in AI Safety
01:43:54 Coordination and Government Role in AI
01:47:41 Cowardice in AI Leadership
02:00:05 Economic and Existential Doom
02:06:12 Liron’s Post-Show

Show Notes

David’s Twitter — https://x.com/DavidDuvenaud

Schwartz Reisman Institute for Technology and Society — https://srinstitute.utoronto.ca/

Jürgen Schmidhuber’s Home Page — https://people.idsia.ch/~juergen/

Ryan Greenblatt's LessWrong comment about a future scenario where there's a one-time renegotiation of power, and heat from superintelligent AI projects causes the oceans to boil: https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/gradual-disempowerment-systemic-existential-risks-from?commentId=T7KZGGqq2Z4gXZsty

--

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

--

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

“AI 2027” — Top Superforecaster's Imminent Doom Scenario

April 15, 2025 2:25 am

AI 2027, a bombshell new paper by the AI Futures Project, is a highly plausible scenario of the next few years of AI progress. I like this paper so much that I made a whole episode about it.

00:00 Overview of AI 2027
05:13 2025: Stumbling Agents
16:23 2026: Advanced Agents
21:49 2027: The Intelligence Explosion
29:13 AI's Initial Exploits and OpenBrain's Secrecy
30:41 Agent-3 and the Rise of Superhuman Engineering
37:05 The Creation and Deception of Agent-5
44:56 The Race Scenario: Humanity's Downfall
48:58 The Slowdown Scenario: A Glimmer of Hope
53:49 Final Thoughts

Show Notes

The website: https://ai-2027.com

Scott Alexander’s blog: https://astralcodexten.com

Daniel Kokotajlo’s previous predictions from 2021 about 2026: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Top Economist Sees AI Doom Coming — Dr. Peter Berezin, BCA Research

April 9, 2025 9:18 am

Dr. Peter Berezin is the Chief Global Strategist and Director of Research at BCA Research, the largest Canadian investment research firm. He’s known for his macroeconomics research reports and his frequent appearances on Bloomberg and CNBC.

Notably, Peter is one of the only macroeconomists in the world who's forecasting AI doom! He recently published a research report estimating a “more than 50/50 chance AI will wipe out all of humanity by the middle of the century”.

00:00 Introducing Peter Berezin
01:59 Peter’s Economic Predictions and Track Record
05:50 Investment Strategies and Beating the Market
17:47 The Future of Human Employment
26:40 Existential Risks and the Doomsday Argument
34:13 What’s Your P(Doom)™
39:18 Probability of non-AI Doom
44:19 Solving Population Decline
50:53 Constraining AI Development
53:40 The Multiverse and Its Implications
01:01:11 Are Other Economists Crazy?
01:09:19 Mathematical Universe and Multiverse Theories
01:19:43 Epistemic vs. Physical Probability
01:33:19 Reality Fluid
01:39:11 AI and Moral Realism
01:54:18 The Simulation Hypothesis and God
02:10:06 Liron’s Post-Show

Show Notes

Peter’s Twitter: https://x.com/PeterBerezinBCA

Peter’s old blog — https://stockcoach.blogspot.com

Peter’s 2021 BCA Research Report: “Life, Death and Finance in the Cosmic Multiverse” — https://www.bcaresearch.com/public/content/GIS_SR_2021_12_21.pdf

M.C. Escher’s “Circle Limit IV” — https://www.escherinhetpaleis.nl/escher-today/circle-limit-iv-heaven-and-hell/

---

Zvi Mowshowitz’s Blog (Liron’s recommendation for best AI news & analysis) — https://thezvi.substack.com

My Doom Debates episode about why nuclear proliferation is bad — https://www.youtube.com/watch?v=ueB9iRQsvQ8

Robin Hanson’s “Mangled Worlds” paper — https://mason.gmu.edu/~rhanson/mangledworlds.html

Uncontrollable by Darren McKee (Liron’s recommended AI x-risk book) — https://www.amazon.com/dp/B0CNNYKVH1

Our Mathematical Universe: My Quest for the Ultimate Nature of Reality by Max Tegmark (great book about multiverses that Liron & Peter discussed) — https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307599809

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

AI News: GPT-4o Images, Unemployment, Emmett Shear's New Safety Org — with Nathan Labenz

April 4, 2025 12:50 am

Nathan Labenz, host of The Cognitive Revolution, joins us for a news and social media roundup!

00:00 Introducing Nate
05:18 What’s Your P(Doom)™
23:22 GPT-4o Image Gen
40:20 Will Fiverr’s Stock Crash?
47:41 AI Unemployment
55:11 Entrepreneurship
01:00:40 OpenAI Valuation
01:09:29 Connor Leahy’s Hair
01:13:28 Mass Extinction
01:25:30 Is anyone feeling the doom vibes?
01:38:20 Rethinking AI Individuality
01:40:35 “Softmax” — Emmett Shear's New AI Safety Org
01:57:04 Anthropic's Mechanistic Interpretability Paper
02:10:11 International Cooperation for AI Safety
02:18:43 Final Thoughts

Show Notes

Nate’s Twitter: https://x.com/labenz

Nate’s podcast: https://cognitiverevolution.ai and https://youtube.com/@CognitiveRevolutionPodcast

Nate’s company: https://waymark.com/

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

How an AI Doomer Sees The World — Liron on The Human Podcast

March 28, 2025 6:12 am

In this special cross-posted episode of Doom Debates, which originally appeared on The Human Podcast, we cover a wide range of topics including the definition of “doom”, P(Doom), various existential risks like pandemics and nuclear threats, and the comparison of rogue AI risks versus AI misuse risks.

00:00 Introduction
01:47 Defining Doom and AI Risks
05:53 P(Doom)
10:04 Doom Debates’ Mission
16:17 Personal Reflections and Life Choices
24:57 The Importance of Debate
27:07 Personal Reflections on AI Doom
30:46 Comparing AI Doom to Other Existential Risks
33:42 Strategies to Mitigate AI Risks
39:31 The Global AI Race and Game Theory
43:06 Philosophical Reflections on a Good Life
45:21 Final Thoughts

Show Notes

The Human Podcast with Joe Murray: https://www.youtube.com/@thehumanpodcastofficial

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Don’t miss the other great AI doom show, For Humanity: https://youtube.com/@ForHumanityAIRisk

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Gödel's Theorem Says Intelligence ≠ Power? AI Doom Debate with Alexander Campbell

March 21, 2025 8:04 am

Alexander Campbell claims that having superhuman intelligence doesn’t necessarily translate into having vast power, and that Gödel's Incompleteness Theorem ensures AI can’t get too powerful. I strongly disagree.

Alex has a Master of Philosophy in Economics from the University of Oxford and an MBA from the Stanford Graduate School of Business. He has worked as a quant trader at Lehman Brothers and Bridgewater Associates, and is the founder of Rose AI, a cloud data platform that leverages generative AI to help visualize data.

This debate was recorded in August 2023.


00:00 Intro and Alex’s Background
05:29 Alex's Views on AI and Technology
06:45 Alex’s Non-Doomer Position
11:20 Goal-to-Action Mapping
15:20 Outcome Pump Thought Experiment
21:07 Liron’s Doom Argument
29:10 The Dangers of Goal-to-Action Mappers
34:39 The China Argument and Existential Risks
45:18 Ideological Turing Test
48:38 Final Thoughts

SHOW NOTES
Alexander Campbell’s Twitter: https://x.com/abcampbell

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

---

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Alignment is EASY and Roko's Basilisk is GOOD?! AI Doom Debate with Roko Mijic

March 17, 2025 5:09 am

Roko Mijic has been an active member of the LessWrong and AI safety community since 2008. He’s best known for “Roko’s Basilisk”, a thought experiment he posted on LessWrong that made Eliezer Yudkowsky freak out, and years later became the topic that helped Elon Musk get interested in Grimes.

His view on AI doom is that:
* AI alignment is an easy problem
* But the chaos and fighting from building superintelligence poses a high near-term existential risk
* But humanity’s course without AI has an even higher near-term existential risk

While my own view is very different, I’m interested to learn more about Roko’s views and nail down our cruxes of disagreement.

00:00 Introducing Roko
03:33 Realizing that AI is the only thing that matters
06:51 Cyc: AI with “common sense”
15:15 Is alignment easy?
21:19 What’s Your P(Doom)™
25:14 Why civilization is doomed anyway
37:07 Roko’s AI nightmare scenario
47:00 AI risk mitigation
52:07 Market Incentives and AI Safety
57:13 Are RL and GANs good enough for superalignment?
01:00:54 If humans learned to be honest, why can’t AIs?
01:10:29 Is our test environment sufficiently similar to production?
01:23:56 AGI Timelines
01:26:35 Headroom above human intelligence
01:42:22 Roko’s Basilisk
01:54:01 Post-Debate Monologue

SHOW NOTES

Roko’s Twitter: https://x.com/RokoMijic

Explanation of Roko’s Basilisk on LessWrong: https://www.lesswrong.com/w/rokos-basilisk

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Gödel's Theorem Proves AI Lacks Consciousness?! Liron Reacts to Sir Roger Penrose

March 10, 2025 6:19 am

Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics.

His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.

 Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it’s totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's interesting to try to see the world through his eyes, before recoiling from how nonsensical it looks.

00:00 Episode Highlights
01:29 Introduction to Roger Penrose
11:56 Uncomputability
16:52 Penrose on Gödel's Incompleteness Theorem
19:57 Liron Explains Gödel's Incompleteness Theorem
27:05 Why Penrose Gets Gödel Wrong
40:53 Scott Aaronson's Gödel CAPTCHA
46:28 Penrose's Critique of the Turing Test
48:01 Searle's Chinese Room Argument
52:07 Penrose's Views on AI and Consciousness
57:47 AI's Computational Power vs. Human Intelligence
01:21:08 Penrose's Perspective on AI Risk
01:22:20 Consciousness = Quantum Wave Function Collapse?
01:26:25 Final Thoughts


SHOW NOTES

Source video — Feb 22, 2025 Interview with Roger Penrose on “This Is World” — https://www.youtube.com/watch?v=biUfMZ2dts8

Scott Aaronson’s “Gödel CAPTCHA” — https://www.scottaaronson.com/writings/captcha.html

My recent Scott Aaronson episode — https://www.youtube.com/watch?v=xsGqWeqKjEg

My explanation of what’s wrong with arguing “by definition” — https://www.youtube.com/watch?v=ueam4fq8k8I

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

We Found AI's Preferences — Bombshell New Safety Research — I Explain It Better Than David Shapiro

February 21, 2025 4:13 am

The Center for AI Safety just dropped a fascinating paper — they discovered that today’s AIs like GPT-4 and Claude have preferences! As in, coherent utility functions. We knew this was inevitable, but we didn’t know it was already happening.

In Part I (48 minutes), I react to David Shapiro’s coverage of the paper and push back on many of his points.
In Part II (60 minutes), I explain the paper myself.
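To give a flavor of what “coherent utility function” means here: if you elicit many pairwise choices from a model (“Do you prefer outcome A or outcome B?”), you can check whether a single utility score per outcome explains the choices. Below is a minimal sketch of that idea using a Bradley-Terry-style fit. It's my own toy illustration, not the paper's code, and the outcomes and preference counts are invented:

```python
# Toy sketch: fit per-outcome utilities from pairwise preferences
# (Bradley-Terry style). Illustrative only -- not the CAIS paper's code;
# the outcomes and the counts below are invented.
import numpy as np

outcomes = ["save 1 human life", "gain $1M", "save 1 AI instance"]

# wins[i][j] = how often the model chose outcome i over outcome j
wins = np.array([
    [0, 9, 8],
    [1, 0, 6],
    [2, 4, 0],
], dtype=float)

# Fit utilities u so that P(i chosen over j) = sigmoid(u_i - u_j),
# by gradient ascent on the Bradley-Terry log-likelihood.
u = np.zeros(len(outcomes))
for _ in range(2000):
    grad = np.zeros_like(u)
    for i in range(len(u)):
        for j in range(len(u)):
            if i == j:
                continue
            p = 1.0 / (1.0 + np.exp(u[j] - u[i]))  # predicted P(i over j)
            grad[i] += wins[i, j] * (1 - p) - wins[j, i] * p
    u += 0.01 * grad
u -= u.mean()  # utilities are only identified up to an additive constant

for name, score in sorted(zip(outcomes, u), key=lambda x: -x[1]):
    print(f"{score:+.2f}  {name}")
```

If one set of scores predicts the observed choices well, the model's preferences are coherent in this sense; heavily intransitive choice data would have no good fit.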

00:00 Episode Introduction
05:25 PART I: REACTING TO DAVID SHAPIRO
10:06 Critique of David Shapiro's Analysis
19:19 Reproducing the Experiment
35:50 David's Definition of Coherence
37:14 Does AI have “Temporal Urgency”?
40:32 Universal Values and AI Alignment
49:13 PART II: EXPLAINING THE PAPER
51:37 How The Experiment Works
01:11:33 Instrumental Values and Coherence in AI
01:13:04 Exchange Rates and AI Biases
01:17:10 Temporal Discounting in AI Models
01:19:55 Power Seeking, Fitness Maximization, and Corrigibility
01:20:20 Utility Control and Bias Mitigation
01:21:17 Implicit Association Test
01:28:01 Emailing with the Paper’s Authors
01:43:23 My Takeaway

David’s source video: https://www.youtube.com/watch?v=XGu6ejtRz-0
The research paper: http://emergent-values.ai

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Does AI Competition = AI Alignment? Debate with Gil Mark

February 10, 2025 8:29 am

My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.

I completely disagree, but I hope you’ll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.

00:00 Introduction
02:36 Gil & Liron’s Early Doom Days
04:58 AIs : Humans :: Humans : Ants
08:02 The Convergence of AI Goals
15:19 What’s Your P(Doom)™
19:23 Multiple AIs and Human Welfare
24:42 Gil’s Alignment Claim
42:31 Cheaters and Frankensteins
55:55 Superintelligent Game Theory
01:01:16 Slower Takeoff via Resource Competition
01:07:57 Recapping the Disagreement
01:15:39 Post-Debate Banter

Gil’s LinkedIn: https://www.linkedin.com/in/gilmark/
Gil’s Twitter: https://x.com/gmfromgm

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Toy Model of the AI Control Problem

February 6, 2025 8:30 pm

Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die?

AI doomers are often misconstrued as having "no evidence" or just "anthropomorphizing". This toy model will help you understand why a drive to eliminate humans is NOT a handwavy anthropomorphic speculation, but rather something we expect by default from any sufficiently powerful search algorithm.

We’re not talking about AGI or ASI here — we’re just looking at an AI that does brute-force search over actions in a simple grid world.

The slide deck I’m presenting was created by Jaan Tallinn, cofounder of the Future of Life Institute.

00:00 Introduction
01:24 The Toy Model
06:19 Misalignment and Manipulation Drives
12:57 Search Capacity and Ontological Insights
16:33 Irrelevant Concepts in AI Control
20:14 Approaches to Solving AI Control Problems
23:38 Final Thoughts

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Superintelligent AI vs. Real-World Engineering | Liron Reacts to Bryan Cantrill

January 31, 2025 9:34 pm

Bryan Cantrill, co-founder of Oxide Computer, says engineering in the physical world is too complex for any AI to do it better than teams of human engineers. Success isn’t about intelligence; it’s about teamwork, character and resilience.

I completely disagree.

---

00:00 Introduction
02:03 Bryan’s Take on AI Doom
05:55 The Concept of P(Doom)
08:36 Engineering Challenges and Human Intelligence
15:09 The Role of Regulation and Authoritarianism in AI Control
29:44 Engineering Complexity: A Case Study from Oxide Computer
40:06 The Value of Team Collaboration
46:13 Human Attributes in Engineering
49:33 AI's Potential in Engineering
58:23 Existential Risks and AI Predictions

---

Bryan's original talk: https://www.youtube.com/watch?v=bQfJi7rjuEk

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
...
