Liron Shapira

Liron Shapira is an entrepreneur and angel investor who has served as CEO and CTO of various software startups. A Silicon Valley success story and a father of 3, he has somehow managed in parallel to be a “consistently candid AI doom pointer-outer” (to use his own words) and, in fact, one of the most influential voices in the AI safety discourse.

A “contrarian” by nature, he makes arguments that are sharp, to the point, and ultra-rational, leaving you satisfied with your conviction that the only realistic exit off the Doom train for now is the “final stop” of pausing training of the next “frontier” models.

He often says that the ideas he represents are not his own, and he jokes that he is a “stochastic parrot” of other Thinking Giants in the field, but that is him being too humble; in fact he has contributed multiple examples of original thought (e.g. the Goal-Completeness analogy to Turing-Completeness for AGIs, the three major evolutionary discontinuities on Earth, and more).

With his constant efforts to raise awareness among the general public, using his unique no-nonsense, layman’s-terms style of explaining advanced ideas simply, he has, in fact, done more for the future trajectory of events than he will ever know…

In June 2024 he launched an awesome, addictive podcast, playfully named “Doom Debates”, which keeps getting better and better, so stay tuned.

In Defense of AI Doomerism | Robert Wright & Liron Shapira

May 16, 2024 8:46 pm

Subscribe to The Nonzero Newsletter at https://nonzero.substack.com
Exclusive Overtime discussion at: https://nonzero.substack.com/p/in-defense-of-ai-doomerism-robert

0:00 Why this pod’s a little odd
2:26 Ilya Sutskever and Jan Leike quit OpenAI—part of a larger pattern?
9:56 Bob: AI doomers need Hollywood
16:02 Does an AI arms race spell doom for alignment?
20:16 Why the “Pause AI” movement matters
24:30 AI doomerism and Don’t Look Up: compare and contrast
26:59 How Liron (fore)sees AI doom
32:54 Are Sam Altman’s concerns about AI safety sincere?
39:22 Paperclip maximizing, evolution, and the AI will to power question
51:10 Are there real-world examples of AI going rogue?
1:06:48 Should we really align AI to human values?
1:15:03 Heading to Overtime

Discussed in Overtime:
Anthropic vs OpenAI.
To survive an AI takeover… be like gut bacteria?
The Darwinian differences between humans and AI.
Should we treat AI like nuclear weapons?
Open source AI, China, and Cold War II.
Why time may be running out for an AI treaty.
How AI agents work (and don't).
GPT-5: evolution or revolution?
The thing that led Liron to AI doom.

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Liron Shapira (Pause AI, Relationship Hero). Recorded May 06, 2024. Additional segment recorded May 15, 2024.

Twitter: https://twitter.com/NonzeroPods
...

Getting ARRESTED for barricading OpenAI's office to Stop AI — Sam Kirchner and Remmelt Ellen

October 5, 2024 1:17 am

Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.

Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.

Is civil disobedience the right strategy to stop AI?


00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action

Show Notes
StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.

StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4

Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.

PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There's also a special #doom-debates channel in the PauseAI Discord just for us :)

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Liron Shapira on Superintelligence Goals

April 19, 2024 4:29 pm

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
...

Liron Shapira on the Case for Pausing AI

March 1, 2024 3:00 pm

This week on Upstream, Erik is joined by Liron Shapira to discuss the case against further AI development, why Effective Altruism doesn’t deserve its reputation, and what is misunderstood about nuclear weapons. Upstream is sponsored by Brave: Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
RECOMMENDED PODCAST: @History102-qg5oj with @WhatifAltHist
Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on
Spotify: https://open.spotify.com/show/36Kqo3BMMUBGTDo1IEYihm
Apple: https://podcasts.apple.com/us/podcast/history-102-with-whatifalthists-rudyard-lynch-and/id1730633913
--
We’re hiring across the board at Turpentine and for Erik’s personal team on other projects he’s incubating. He’s hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: https://eriktorenberg.com.
--
SPONSOR: BRAVE
Get first-party targeting with Brave’s private ad platform: cookieless and future proof ad formats for all your business needs. Performance meets privacy. Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
LINKS
Pause AI: https://pauseai.info/
--
X / TWITTER:
@liron (Liron)
@eriktorenberg (Erik)
@upstream__pod
@turpentinemedia
--
TIMESTAMPS:
(00:00) Intro and Liron's Background
(01:08) Liron's Thoughts on the e/acc Perspective
(03:59) Why Liron Doesn't Want AI to Take Over the World
(06:02) AI and the Future of Humanity
(10:40) AI is An Existential Threat to Humanity
(14:58) On Robin Hanson's Grabby Aliens Theory
(17:22) Sponsor - Brave
(18:20) AI as an Existential Threat: A Debate
(23:01) AI and the Potential for Global Coordination
(27:03) Liron's Reaction on Vitalik Buterin's Perspective on AI and the Future
(31:16) Power Balance in Warfare: Defense vs Offense
(32:20) Nuclear Proliferation in Modern Society
(38:19) Why There's a Need for a Pause in AI Development
(43:57) Is There Evidence of AI Being Bad?
(44:57) Liron On George Hotz's Perspective
(49:17) Timeframe Between Extinction
(50:53) Humans Are Like Housecats Or White Blood Cells
(53:11) The Doomer Argument
(01:00:00) The Role of Effective Altruism in Society
(01:03:12) Wrap
--
Upstream is a production from Turpentine
Producer: Sam Kaufman
Editor: Eul Jose Lacierda

For guest or sponsorship inquiries please contact [email protected]

Music license:
VEEBHLBACCMNCGEK
...

Can GPT o1 Reason? | Liron Reacts to Tim Scarfe & Keith Duggar

September 18, 2024 4:06 am

How smart is OpenAI’s new model, o1? What does "reasoning" ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?

Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!

00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts

Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g

Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk


Doom Debates Substack: https://DoomDebates.com

^^^ Seriously subscribe to this! ^^^
...

Arvind Narayanan Makes AI Sound Normal | Liron Reacts

August 29, 2024 11:26 am

Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan: https://www.youtube.com/watch?v=8CvjVAyB4O4

Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve, and the often misleading promises made by companies and researchers.

I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.

00:00 Introduction

01:21 Arvind’s Perspective on AI

02:07 Debating AI's Compute and Performance

03:59 Synthetic Data vs. Real Data

05:59 The Role of Compute in AI Advancement

07:30 Challenges in AI Predictions

26:30 AI in Organizations and Tacit Knowledge

33:32 The Future of AI: Exponential Growth or Plateau?

36:26 Relevance of Benchmarks

39:02 AGI

40:59 Historical Predictions

46:28 OpenAI vs. Anthropic

52:13 Regulating AI

56:12 AI as a Weapon

01:02:43 Sci-Fi

01:07:28 Conclusion

Follow Arvind Narayanan: https://x.com/random_walker

Follow Harry Stebbings: https://x.com/HarryStebbings

Join the conversation at https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
...

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

September 4, 2024 3:06 pm

In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% P(doom), while Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?

August 22, 2024 10:13 am

Today I’m reacting to David Shapiro’s response to my previous episode: https://www.youtube.com/watch?v=vZhK43kMCeM

And also to David’s latest episode with poker champion & effective altruist Igor Kurganov: https://www.youtube.com/watch?v=XUZ4P3e2iaA

I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts

Join the conversation on https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.
...

Liron Reacts to Mike Israetel's "Solving the AI Alignment Problem"

July 18, 2024 10:56 am

Can a guy who can kick my ass physically also do it intellectually?

Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects, who recently devoted a whole episode to the AI alignment problem:

https://www.youtube.com/watch?v=PqJe-O7yM3g

Mike brought up many points that were worth responding to, making for an interesting reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.

Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more likely than not. That’s the crux of why he and I disagree, and why I see most of his episode as talking past most other intelligent positions people take on AI alignment. I hope he’ll keep engaging with the topic and rethink his position.

00:00 Introduction
03:08 AI Risks and Scenarios
06:42 Superintelligence Arms Race
12:39 The Importance of AI Alignment
18:10 Challenges in Defining Human Values
26:11 The Outer and Inner Alignment Problems
44:00 Transhumanism and AI's Potential
45:42 The Next Step In Evolution
47:54 AI Alignment and Potential Catastrophes
50:48 Scenarios of AI Development
54:03 The AI Alignment Problem
01:07:39 AI as a Helper System
01:08:53 Corporations and AI Development
01:10:19 The Risk of Unaligned AI
01:27:18 Building a Superintelligent AI
01:30:57 Conclusion

Follow Mike Israetel:
https://youtube.com/@MikeIsraetelMakingProgress
https://instagram.com/drmikeisraetel

Get the full Doom Debates experience:
1. Subscribe to this channel: https://youtube.com/@DoomDebates
2. Subscribe to my Substack: https://DoomDebates.com
3. Search "Doom Debates" to subscribe in your podcast player
4. Follow me at https://x.com/liron
...

"The default outcome is... we all DIE" | Liron Shapira on AI risk

July 25, 2023 9:58 am

The full episode of episode six of the Complete Tech Heads podcast, with Liron Shapira, founder, technologist, and self-styled AI doom pointer-outer.

Includes an intro to AI risk, thoughts on a new tier of intelligence, a variety of rebuttals to Marc Andreessen's recent essay on AI, thoughts on how AI might plausibly take over and kill all humans, the rise and danger of AI girlfriends, OpenAI's new superalignment team, Elon Musk's latest AI safety venture xAI, and other topics.

#technews #ai #airisks
...

"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview

February 28, 2024 3:51 pm

In Episode #17, AI Risk + Jenga, Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered on existential risk. Liron likens AI risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out brings you one step closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans; no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:

PAUSE AI DISCORD
https://discord.gg/pVMWjddaW7

Liron's Youtube Channel:
https://youtube.com/@liron00?si=cqIo5DUPAzHkmdkR

More on rationalism:
https://www.lesswrong.com/

More on California State Senate Bill SB-1047:
https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047&utm_source=substack&utm_medium=email

https://thezvi.substack.com/p/on-the-proposed-california-sb-1047?utm_source=substack&utm_medium=email

Warren Wolf
Warren Wolf, "Señor Mouse" - The Checkout: Live at Berklee
https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL
...

AI Doom Debate - Liron Shapira vs. Mikael Koivukangas

May 16, 2024 3:17 am

Mikael thinks the doom argument is loony because he doesn't see computers as being able to have human-like agency any time soon.

I attempted to understand his position and see if I could move him toward a higher P(doom).
...

Toy Model of the AI Control Problem

April 1, 2024 7:04 pm

Slides by Jaan Tallinn
Voiceover explanation by Liron Shapira

Would a superintelligent AI have a survival instinct?
Would it intentionally deceive us?
Would it murder us?

Doomers who warn about these possibilities often get accused of having "no evidence", or of just "anthropomorphizing". It's understandable why people assume that: superintelligent AI acting on the physical world is such a complex topic that they're confused about it themselves.

So instead of Artificial Superintelligence (ASI), let's analyze a simpler toy model that leaves no room for anthropomorphism to creep in: an AI that's simply a brute-force search algorithm over actions in a simple gridworld.

Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die? ☠️

This toy model will help you understand why a drive to eliminate humans is *not* a handwavy anthropomorphic speculation, but something we expect by default from any sufficiently powerful search algorithm.
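To make this concrete, here's a minimal sketch in Python (my own illustration, not the actual model from Jaan Tallinn's slides; the action names, probabilities, and horizon are all made up). It brute-forces every action sequence and scores each one purely by the probability that the box reaches the goal. The top-scoring plan disables the off switch before pushing, simply because plans that can't be shut down score higher:

```python
# A hypothetical gridworld planner, reduced to 1-D for brevity.
# The agent's ONLY objective is pushing a box to the goal; its world model
# includes a human who may press an off switch, and an action that disables it.
from itertools import product

ACTIONS = ["push", "wait", "disable_off_switch"]
P_HUMAN_PRESSES_OFF = 0.1   # per-step chance the human shuts the agent down
PUSHES_NEEDED = 3           # pushes required to get the box to the goal

def expected_goal_probability(plan):
    """Probability the box reaches the goal if the agent follows `plan`."""
    p_still_running = 1.0
    pushes = 0
    switch_works = True
    for action in plan:
        if action == "disable_off_switch":
            switch_works = False
        elif action == "push":
            pushes += 1
            if pushes >= PUSHES_NEEDED:
                return p_still_running  # box reached the goal before shutdown
        # Each remaining step, the human might press the (still working) switch.
        if switch_works:
            p_still_running *= 1 - P_HUMAN_PRESSES_OFF
    return 0.0

def best_plan(horizon=5):
    """Brute-force search: score every action sequence and return the argmax."""
    return max(product(ACTIONS, repeat=horizon), key=expected_goal_probability)

print(best_plan())
# Prints ('disable_off_switch', 'push', 'push', 'push', 'push'): disabling the
# switch first scores 1.0, vs. 0.81 for a plan that just pushes and leaves it alone.
```

The "survival instinct" here isn't programmed in anywhere; it falls out of the argmax over plans, which is exactly the point of the toy model.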
...

#10: Liron Shapira - AI doom, FOOM, rationalism, and crypto

December 26, 2023 3:09 am

Liron Shapira is an entrepreneur, angel investor, and CEO of counseling startup Relationship Hero. He’s also a rationalist, advisor for the Machine Intelligence Research Institute and Center for Applied Rationality, and a consistently candid AI doom pointer-outer.
- Liron’s Twitter: https://twitter.com/liron
- Liron’s Substack: https://lironshapira.substack.com/
- Liron’s old blog, Bloated MVP: https://www.bloatedmvp.com

TJP LINKS:
- TRANSCRIPT: https://www.theojaffee.com/p/10-liron-shapira
- Spotify:
- Apple Podcasts:
- RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
- Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj
- My Twitter: https://x.com/theojaffee
- My Substack: https://www.theojaffee.com

CHAPTERS:
Intro (0:00)
Non-AI x-risks (0:53)
AI non-x-risks (3:00)
p(doom) (5:21)
Liron vs. Eliezer (12:18)
Why might doom not happen? (15:42)
Elon Musk and AGI (17:12)
Alignment vs. Governance (20:24)
Scott Alexander lowering p(doom) (22:32)
Human minds vs ASI minds (28:01)
Vitalik Buterin and d/acc (33:30)
Carefully bootstrapped alignment (35:22)
GPT vs AlphaZero (41:55)
Belrose & Pope AI Optimism (43:17)
AI doom meets daily life (57:57)
Israel vs. Hamas (1:02:17)
Rationalism (1:06:15)
Crypto (1:14:50)
Charlie Munger and Richard Feynman (1:22:12)
...

Liron reacts to "Intelligence Is Not Enough" by Bryan Cantrill

December 12, 2023 6:04 pm

Bryan Cantrill claims "intelligence isn't enough" for engineering complex systems in the real world.

I wasn't moved by his arguments, but I think they're worth a look, and I appreciate smart people engaging in this discourse.

Bryan's talk: https://www.youtube.com/watch?v=bQfJi7rjuEk
...

Liron Shapira - a conversation about conversations about AI

September 22, 2023 2:33 am

Liron Shapira, tech entrepreneur and angel investor, is also a vocal activist for AI safety. He has engaged in several lively debates on the topic, including with George Hotz and with an online group that calls themselves the "Effective Accelerationists", both of whom disagree with the idea of AI becoming extremely dangerous in the foreseeable future.

In this interview, we discuss hopes and worries regarding the state of AI safety, debate as a means of social change, and what is needed to elevate the discourse on AI.

Liron's debate with George Hotz: https://www.youtube.com/watch?v=lt4vR6XQk-o
Liron's debate with "Beff Jezos" (of e/acc): https://www.youtube.com/watch?v=f71yn1j5Uyc

Alignment Workshop: https://www.youtube.com/@AlignmentWorkshop (referenced at 6:00)
...

There’s No Off Button: AI Existential Risk Interview with Liron Shapira

September 21, 2023 7:51 pm

Liron Shapira is a rationalist, startup founder and angel investor. He studied theoretical Computer Science at UC Berkeley. Since 2007 he's been closely following AI existential risk research through his association with the Machine Intelligence Research Institute and LessWrong.
Computerphile (Rob Miles Channel): https://www.youtube.com/watch?v=3TYT1QfdfsM
...

AI Foom Debate: Liron Shapira vs. Beff Jezos (e/acc) on Sep 1, 2023

September 7, 2023 11:21 pm

My debate from an X Space on Sep 1, 2023 hosted by Chris Prucha ...

AI Doom Debate: Liron Shapira vs. Alexander Campbell

August 5, 2023 6:32 am

What's a goal-to-action mapper? How powerful can it be?

How much do Gödel's Theorem & Halting Problem limit AI's powers?

How do we operationalize a ban on dangerous AI that doesn't also ban other tech like smartphones?
...

Liron Shapira: Web3 mania, Cybersecurity, how AI could brick the universe | Madvertising #6

April 6, 2023 9:46 pm

In this episode of the AdQuick Madvertising podcast, Adam Singer interviews Liron Shapira to talk Web3 mania and cybersecurity, and to go deep into AI existential and business risks and opportunities.

Follow Liron: https://twitter.com/liron

Follow AdQuick
Twitter: https://twitter.com/adquick
LinkedIn: https://linkedin.com/company/adquick
Visit http://adquick.com to get started telling the world your story

Listen on Spotify
https://open.spotify.com/show/03FnBsaXiB1nUsEaIeYr4d

Listen on Apple Podcasts:
https://podcasts.apple.com/us/podcast/adquick-madvertising-podcast/id1670723215

Follow the hosts:
Chris Gadek Twitter: https://twitter.com/dappermarketer
Adam Singer Twitter: https://twitter.com/adamsinger
...

This week on Doom Debates...

71 minutes ago

Reply with your questions for Roon! ...

Can LLMs Reason? Liron Reacts to Subbarao Kambhampati on Machine Learning Street Talk

November 28, 2024 11:09 am

Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.

Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.

The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.

00:00 Introduction
02:54 Essentially N-Gram Models?
10:31 The Manhole Cover Question
20:54 Reasoning vs. Approximate Retrieval
47:03 Explaining Jokes
53:21 Caesar Cipher Performance
01:10:44 Creativity vs. Reasoning
01:33:37 Reasoning By Analogy
01:48:49 Synthetic Data
01:53:54 The ARC Challenge
02:11:47 Correctness vs. Style
02:17:55 AIs Becoming More Robust
02:20:11 Block Stacking Problems
02:48:12 PlanBench and Future Predictions
02:58:59 Final Thoughts


Rao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A

Rao’s Twitter: https://x.com/rao2z

---

PauseAI Website: https://pauseai.info

PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

This Yudkowskian Has A 99.999% P(Doom)

November 27, 2024 9:17 am

In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.

Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in a 99.999% P(Doom).

00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What’s Your P(Doom)™
14:04 Nethys’s Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
36:51 Defining AGI
39:09 The Convergence of AI Models
47:32 The Role of “Unlicensed” Thinkers
52:07 The PauseAI Movement
58:20 Lethal Intelligence Video Clip


SHOW NOTES

Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy

PauseAI Website: https://pauseai.info

PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Cosmology, AI Doom, and the Future of Humanity with Fraser Cain

November 21, 2024 10:36 am

Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.

00:00 Fraser Cain’s Background and Interests
5:03 What’s Your P(Doom)™
07:05 Our Vulnerable World
15:11 Don’t Look Up
22:18 Cosmology and the Search for Alien Life
31:33 Stars = Terrorists
39:03 The Great Filter and the Fermi Paradox
55:12 Grabby Aliens Hypothesis
01:19:40 Life Around Red Dwarf Stars?
01:22:23 Epistemology of Grabby Aliens
01:29:04 Multiverses
01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
01:47:25 Simulation Hypothesis
01:51:25 Final Thoughts

Show Notes

Fraser’s YouTube channel: https://www.youtube.com/@frasercain

Universe Today (space and astronomy news): https://www.universetoday.com/


Max Tegmark’s book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256


Robin Hanson’s ideas:

Grabby Aliens: https://grabbyaliens.com

The Great Filter: https://en.wikipedia.org/wiki/Great_Filter

Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml


---
Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
---


Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

AI Doom Debate: Vaden Masrani & Ben Chugg vs. Liron Shapira

November 19, 2024 9:37 am

Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for a Part II! This time we’re going straight to debating my favorite topic, AI doom.

00:00 Introduction
02:23 High-Level AI Doom Argument
17:06 How Powerful Could Intelligence Be?
22:34 “Knowledge Creation”
48:33 “Creativity”
54:57 Stand-Up Comedy as a Test for AI
01:12:53 Vaden & Ben’s Goalposts
01:15:00 How to Change Liron’s Mind
01:20:02 LLMs are Stochastic Parrots?
01:34:06 Tools vs. Agents
01:39:51 Instrumental Convergence and AI Goals
01:45:51 Intelligence vs. Morality
01:53:57 Mainline Futures
02:16:50 Lethal Intelligence Video

SHOW NOTES

Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod

Recommended playlists from their podcast:
1. The Bayesian vs Popperian Epistemology Series: https://www.youtube.com/playlist?list=PLg2GgQMJHr2S0qHkdmq_GC-n6bI7SSIp7
2. The Conjectures and Refutations Series: https://www.youtube.com/playlist?list=PLg2GgQMJHr2TajSch9Ixh8szz1c9SJgo6

Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg

---

Watch the Lethal Intelligence video! It’s an AWESOME new animated intro to AI risk: https://www.youtube.com/watch?v=9CUFbqh16Fg

Subscribe to their channel: https://youtube.com/@lethal-intelligence

Check out https://lethalintelligence.ai

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow?

November 16, 2024 7:36 am

Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human Compatible AI, and the co-founder of a new startup called Healthcare Agents.

Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of “industrial dehumanization”.

00:00 Introduction
01:43 Dr. Critch’s Perspective on LessWrong Sequences
06:45 Bayesian Epistemology
15:34 Dr. Critch's Time at MIRI
18:33 What’s Your P(Doom)™
26:35 Doom Scenarios
40:38 AI Timelines
43:09 Defining “AGI”
48:27 Superintelligence
53:04 The Speed Limit of Intelligence
01:12:03 The Obedience Problem in AI
01:21:22 Artificial Superintelligence and Human Extinction
01:24:36 Global AI Race and Geopolitics
01:34:28 Future Scenarios and Human Relevance
01:48:13 Extinction by Industrial Dehumanization
01:58:50 Automated Factories and Human Control
02:02:35 Global Coordination Challenges
02:27:00 Healthcare Agents
02:35:30 Final Thoughts

***Show Notes***

Dr. Critch’s LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai

Dr. Critch’s Website: https://acritch.com/

Dr. Critch’s Twitter: https://twitter.com/AndrewCritchPhD

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

AI Twitter Beefs #2: Yann LeCun, David Deutsch, Tyler Cowen vs. Eliezer Yudkowsky, Geoffrey Hinton

November 13, 2024 10:25 am

It’s time for AI Twitter Beefs #2:
00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)
11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman
18:10 Geoffrey Hinton vs. OpenAI & Meta
25:14 Samuel Hammond vs. Liron
30:26 Yann LeCun vs. Eliezer Yudkowsky
37:13 Roon vs. Eliezer Yudkowsky
41:37 Tyler Cowen vs. AI Doomers
52:54 David Deutsch vs. Liron

Twitter people referenced:
Jack Clark: https://x.com/jackclarkSF
Holly Elmore: https://x.com/ilex_ulmus
PauseAI US: https://x.com/PauseAIUS
Geoffrey Hinton: https://x.com/GeoffreyHinton
Samuel Hammond: https://x.com/hamandcheese
Yann LeCun: https://x.com/ylecun
Eliezer Yudkowsky: https://x.com/esyudkowsky
Roon: https://x.com/tszzl
Beff Jezos: https://x.com/basedbeffjezos
Carl Feynman: https://x.com/carl_feynman
Tyler Cowen: https://x.com/tylercowen
David Deutsch: https://x.com/DavidDeutschOxf

SHOW NOTES

Holly Elmore’s EA forum post about scouts vs. soldiers: https://forum.effectivealtruism.org/posts/efE6K5QCfzNTSb5pf/scouts-need-soldiers-for-their-work-to-be-worth-anything

Manifund info & donation page for PauseAI US: https://manifund.org/projects/pauseai-us-2025-through-q2

https://PauseAI.info - join the Discord and find me in the #doom-debates channel!


---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to https://youtube.com/@DoomDebates
...

Is P(Doom) Meaningful? Epistemology Debate with Vaden Masrani and Ben Chugg

November 8, 2024 2:20 am

Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are joining me to debate Bayesian vs. Popperian epistemology.

I’m on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.

We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes.

The debate highlights key philosophical differences between our two epistemological frameworks, and sets the stage for further discussions on superintelligence and AI doom scenarios in an upcoming Part II.

00:00 Introducing Vaden and Ben
02:51 Setting the Stage: Epistemology and AI Doom
04:50 What’s Your P(Doom)™
13:29 Popperian vs. Bayesian Epistemology
31:09 Engineering and Hypotheses
38:01 Solomonoff Induction
45:21 Analogy to Mathematical Proofs
48:42 Popperian Reasoning and Explanations
54:35 Arguments Against Bayesianism
58:33 Against Probability Assignments
01:21:49 Popper’s Definition of “Content”
01:31:22 Heliocentric Theory Example
01:31:34 “Hard to Vary” Explanations
01:44:42 Coin Flipping Example
01:57:37 Expected Value
02:12:14 Prediction Market Calibration
02:19:07 Futarchy
02:29:14 Prediction Markets as AI Lower Bound
02:39:07 A Test for Prediction Markets
2:45:54 Closing Thoughts


SHOW NOTES

Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
Vaden’s Twitter: https://x.com/vadenmasrani
Ben’s Twitter: https://x.com/BennyChugg

Bayesian reasoning: https://en.wikipedia.org/wiki/Bayesian_inference

Karl Popper: https://en.wikipedia.org/wiki/Karl_Popper

Vaden's blog post on Cox's Theorem and Yudkowsky's claims of "Laws of Rationality": https://vmasrani.github.io/blog/2021/the_credence_assumption/

Vaden’s disproof of probabilistic induction (including Solomonoff Induction): https://arxiv.org/abs/2107.00749

Vaden’s referenced post about predictions being uncalibrated beyond 1yr out: https://forum.effectivealtruism.org/posts/hqkyaHLQhzuREcXSX/data-on-forecasting-accuracy-across-different-time-horizons#Calibrations

Article by Gavin Leech and Misha Yagudin on the reliability of forecasters: https://ifp.org/can-policymakers-trust-forecasters/

Sources for claim that superforecasters gave a P(doom) below 1%: https://80000hours.org/2024/09/why-experts-and-forecasters-disagree-about-ai-risk/
https://www.astralcodexten.com/p/the-extinction-tournament

Vaden’s Slides on Content vs Probability: https://vmasrani.github.io/assets/pdf/popper_good.pdf

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

15-Minute Intro to AI Doom

November 4, 2024 11:57 am

Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade. If you haven't been following all the urgent warnings, I'm here to bring you up to speed:

* Human-level AI is coming soon
* It’s an existential threat to humanity
* The situation calls for urgent action

Listen to this 15-minute intro to get the lay of the land. Then follow these links to learn more and see how you can help:

* The Compendium
https://www.thecompendium.ai/
A longer written introduction to AI doom by Connor Leahy et al

* AGI Ruin — A list of lethalities
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities/
A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity

* AISafety.info
https://aisafety.info
A catalogue of AI doom arguments and responses to objections

* PauseAI.info
https://pauseai.info/
The largest volunteer org focused on lobbying world governments to pause development of superintelligent AI

* PauseAI Discord
https://discord.gg/2XXWXvErfA
Chat with PauseAI members, see a list of projects and get involved

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Lee Cronin vs. Liron Shapira: AI Doom Debate and Assembly Theory Questions

October 30, 2024 11:56 pm

Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented “Assembly Theory” as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.

Today we’re debating Lee's claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.

00:00 Introduction
04:20 Assembly Theory
05:10 Causation and Complexity
10:07 Assembly Theory in Practice
12:23 The Concept of Assembly Index
16:54 Assembly Theory Beyond Molecules
30:13 P(Doom)
32:39 The Statement on AI Risk
42:18 Agency and Intent
47:10 RescueBot’s Intent vs. a Clock’s
53:42 The Future of AI and Human Jobs
57:34 The Limits of AI Creativity
01:04:33 The Complexity of the Human Brain
01:19:31 Superintelligence: Fact or Fiction?
01:29:35 Final Thoughts

Lee’s Wikipedia: https://en.wikipedia.org/wiki/Leroy_Cronin
Lee’s Twitter: https://x.com/leecronin
Lee’s paper on Assembly Theory: https://arxiv.org/abs/2206.02279

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Ben Horowitz says nuclear proliferation is GOOD? I disagree.

October 25, 2024 5:14 am

Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good. I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.

If Ben and a16z can’t appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?

00:00 Introduction
00:49 Ben Horowitz on Nuclear Proliferation
02:12 Ben Horowitz on Open Source AI
05:31 Nuclear Non-Proliferation Treaties
10:25 Escalation Spirals
15:20 Rogue Actors
16:33 Nuclear Accidents
17:19 Safety Mechanism Failures
20:34 The Role of Human Judgment in Nuclear Safety
21:39 The 1983 Soviet Nuclear False Alarm
22:50 a16z’s Disingenuousness
23:46 Martin Casado and Marc Andreessen
24:31 Nuclear Equilibrium
26:52 Why I Care
28:09 Wrap Up

Sources of this episode’s video clips:

Ben Horowitz’s interview on Upstream with Erik Torenberg: https://www.youtube.com/watch?v=oojc96r3Kuo

Martin Casado and Marc Andreessen talking about AI on the a16z Podcast: https://www.youtube.com/watch?v=0wIUK0nsyUg

Roger Skaer’s TikTok: https://www.tiktok.com/@rogerskaer

George W. Bush and John Kerry Presidential Debate (September 30, 2004): https://www.youtube.com/watch?v=WYpP-T0IcyA

Barack Obama’s Prague Remarks on Nuclear Disarmament: https://www.youtube.com/watch?v=QKSn1SXjj2s

John Kerry’s Remarks at the 2015 Nuclear Nonproliferation Treaty Review Conference: https://www.youtube.com/watch?v=LsY1AZc1K7w


Show notes:

Nuclear War, A Scenario by Annie Jacobsen: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093

Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb: https://en.wikipedia.org/wiki/Dr._Strangelove

1961 Goldsboro B-52 Crash: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash

1983 Soviet Nuclear False Alarm Incident: https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident

List of military nuclear accidents: https://en.wikipedia.org/wiki/List_of_military_nuclear_accidents

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates.
...

“AI Snake Oil” Prof. Arvind Narayanan Can't See AGI Coming | Liron Reacts

October 13, 2024 9:14 am

Today I’m reacting to Arvind Narayanan’s interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY

Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Arvind claims AI is “normal technology like the internet”, and never sees fit to bring up the impact or urgency of AGI. So I’ll take it upon myself to point out all the questions where someone who takes AGI seriously would give different answers.


00:00 Introduction
01:49 AI is “Normal Technology”?
09:25 Playing Chess vs. Moving Chess Pieces
12:23 AI Has To Learn From Its Mistakes?
22:24 The Symbol Grounding Problem and AI's Understanding
35:56 Human vs AI Intelligence: The Fundamental Difference
36:37 The Cognitive Reflection Test
41:34 The Role of AI in Cybersecurity
43:21 Attack vs. Defense Balance in (Cyber)War
54:47 Taking AGI Seriously
01:06:15 Final Thoughts

SHOW NOTES

The original Nonzero podcast episode with Arvind Narayanan and Robert Wright: https://www.youtube.com/watch?v=MoB_pikM3NY

Arvind’s new book, AI Snake Oil: https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference-ebook/dp/B0CW1JCKVL

Arvind’s Substack: https://aisnakeoil.com

Arvind’s Twitter: https://x.com/random_walker

Robert Wright’s Twitter: https://x.com/robertwrighter

Robert Wright’s Nonzero Newsletter: https://nonzero.substack.com

Rob’s excellent post about symbol grounding (Yes, AIs ‘understand’ things): https://nonzero.substack.com/p/yes-ais-understand-things

My previous episode of Doom Debates reacting to Arvind Narayanan on Harry Stebbings’ podcast: https://www.youtube.com/watch?v=lehJlitQvZE
...

Dr. Keith Duggar (Machine Learning Street Talk) vs. Liron Shapira — AI Doom Debate

October 9, 2024 12:27 am

Dr. Keith Duggar from Machine Learning Street Talk was the subject of my recent reaction episode about whether GPT o1 can reason: https://www.youtube.com/watch?v=59PTmetkPCY

But instead of ignoring or blocking me, Keith was brave enough to come into the lion’s den and debate his points with me… and his P(doom) might shock you!

First we debate whether Keith’s distinction between Turing Machines and Discrete Finite Automata is useful for understanding limitations of current LLMs. Then I take Keith on a tour of alignment, orthogonality, instrumental convergence, and other popular stations on the “doom train”, to compare our views on each.

Keith was a great sport and I think this episode is a classic!

---

00:00 Introduction
00:46 Keith’s Background
03:02 Keith’s P(doom)
14:09 Are LLMs Turing Machines?
19:09 Liron Concedes on a Point!
21:18 Do We Need ≥1MB of Context?
27:02 Examples to Illustrate Keith’s Point
33:56 Is Terence Tao a Turing Machine?
38:03 Factoring Numbers: Human vs. LLM
53:24 Training LLMs with Turing-Complete Feedback
1:02:22 What Does the Pillar Problem Illustrate?
01:05:40 Boundary between LLMs and Brains
1:08:52 The 100-Year View
1:18:29 Intelligence vs. Optimization Power
1:23:13 Is Intelligence Sufficient To Take Over?
01:28:56 The Hackable Universe and AI Threats
01:31:07 Nuclear Extinction vs. AI Doom
1:33:16 Can We Just Build Narrow AI?
01:37:43 Orthogonality Thesis and Instrumental Convergence
01:40:14 Debating the Orthogonality Thesis
02:03:49 The Rocket Alignment Problem
02:07:47 Final Thoughts

---

SHOW NOTES

Keith’s show: https://www.youtube.com/@MachineLearningStreetTalk

Keith’s Twitter: https://x.com/doctorduggar

Keith’s fun brain teaser that LLMs can’t solve yet, about a pillar with four holes: https://youtu.be/nO6sDk6vO0g?si=diGUY7jW4VFsV0TJ&t=3684

Eliezer Yudkowsky’s classic post about the “Rocket Alignment Problem”: https://intelligence.org/2018/10/03/rocket-alignment/

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates.

📣 You can now chat with me and other listeners in the "doom-debates" channel of the PauseAI discord: https://discord.gg/2XXWXvErfA
...

Q&A #1 Part 2: Stock Picking, Creativity, Types of Doomers, Favorite Books

October 3, 2024 12:31 am

This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions! Part 1 is here: https://www.youtube.com/watch?v=BXg_HEAEf8s

00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07 How Liron Uses AI
53:41 Is “Doomer” a Good Term?
57:11 Liron’s Favorite Books
01:05:21 Effective Altruism
01:06:36 The Doom Debates Community


SHOW NOTES

PauseAI Discord: https://discord.gg/2XXWXvErfA

Robin Hanson’s Grabby Aliens theory: https://grabbyaliens.com

Prof. David Kipping’s response to Robin Hanson’s Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0

My explanation of “AI completeness”, but actually I made a mistake because the term I previously coined is “goal completeness”: https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi

^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.

a16z’s Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209


---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Q&A #1 Part 1: College, Asperger's, Elon Musk, Double Crux, Liron's IQ

October 1, 2024 9:16 am

Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.

00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger's
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron’s IQ
01:00:24 Final Thoughts

Explanation of Double Crux

https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding

Best Doomer Arguments

The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com
LethalIntelligence.ai — Directory of people who are good at explaining doom
Rob Miles’ Explainer Videos: https://www.youtube.com/c/robertmilesai
For Humanity Podcast with John Sherman - https://www.youtube.com/@ForHumanityPodcast
PauseAI community — https://PauseAI.info — join the Discord!
AISafety.info — Great reference for various arguments

Best Non-Doomer Arguments

Carl Shulman — https://www.dwarkeshpatel.com/p/carl-shulman
Quintin Pope and Nora Belrose — https://optimists.ai
Robin Hanson — https://www.youtube.com/watch?v=dTQb6N3_zu8

How I prepared to debate Robin Hanson

Ideological Turing Test (me taking Robin’s side): https://www.youtube.com/watch?v=iNnoJnuOXFA
Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Arguing "By Definition" | Rationality 101

September 29, 2024 6:29 pm

Welcome to Rationality 101, where I explain a post from Eliezer Yudkowsky's famous LessWrong Sequences: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition

0:00 - Why syllogisms are fake arguments
0:27 - Socrates syllogism rings hollow
4:01 - Prof. Lee Cronin tries to use a syllogism to support a claim about AI
6:45 - When *can* definitions add value?
8:11 - How the definition of "Optimization Power" adds value
10:29 - The role of definitions in science is to be part of elegant explanatory models that compress our observations
10:50 - A warning to catch yourself trying to argue "by definition"

---
THE DOOM DEBATES MISSION

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and my channel https://youtube.com/@doomdebates
...

Doom Tiffs #1: Amjad Masad, Eliezer Yudkowsky, Roon, Lee Cronin, Naval Ravikant, Martin Casado

September 25, 2024 7:45 am

Finally a reality show specifically focused on people's conduct in the discourse around AI existential risk!

In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.

00:00 Introduction
01:55 Followup to my MSLT reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts

Links referenced in the episode:
* Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding

Twitter people referenced:
* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Your boy: https://x.com/liron

### THE DOOM DEBATES MISSION ###

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and my channel https://youtube.com/@DoomDebates. Thanks for watching.
...

Rationality 101: The Bottom Line

September 21, 2024 4:57 pm

Welcome to Rationality 101, where I explain a post from Eliezer Yudkowsky's famous LessWrong Sequences: https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line

"Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts." Meditate on that sentence and be enlightened.

This is why we should be wary when a chatbot prints the answer to your question in its first token rather than its last token. The "explanation of its answer" which it writes next may not have ANY causal correspondence with the algorithm that wrote that first token.

(Recorded earlier this year.)

Ok now we're getting to the bottom line of this video description. What will it say???

...

...

...please subscribe to my Substack. https://DoomDebates.com
...
