Liron Shapira

Liron Shapira is an entrepreneur and angel investor who has served as CEO and CTO of various software startups. A Silicon Valley success story and a father of three, he has somehow also managed, in parallel, to be a “consistently candid AI doom pointer-outer” (to use his own words) and, in fact, one of the most influential voices in the AI safety discourse.

A “contrarian” by nature, he makes arguments that are sharp, to the point, and ultra-rational, leaving you satisfied in your conviction that the only realistic exit off the Doom train, for now, is the “final stop” of pausing the training of the next “frontier” models.

He often says that the ideas he represents are not his own, joking that he is a “stochastic parrot” of other thinking giants in the field, but that is him being too humble: he has in fact contributed multiple examples of original thought (e.g. the goal-completeness analogy to Turing-completeness for AGIs, the three major evolutionary discontinuities on Earth, and more…).

Through his constant efforts to raise awareness among the general public, using his unique no-nonsense, layman's-terms style of explaining advanced ideas simply, he has done more for the future trajectory of events than he will ever know…

In June 2024 he launched an awesome, addictive podcast, playfully named “Doom Debates”, which keeps getting better and better, so stay tuned.

The open problem of AI Corrigibility explained by Liron Shapira

Complexity is in the eye of the beholder – by Liron Shapira

AI perceives humans as plants

Rocket Alignment Analogy

In Defense of AI Doomerism | Robert Wright & Liron Shapira

May 16, 2024 8:46 pm

Subscribe to The Nonzero Newsletter at https://nonzero.substack.com
Exclusive Overtime discussion at: https://nonzero.substack.com/p/in-defense-of-ai-doomerism-robert

0:00 Why this pod’s a little odd
2:26 Ilya Sutskever and Jan Leike quit OpenAI—part of a larger pattern?
9:56 Bob: AI doomers need Hollywood
16:02 Does an AI arms race spell doom for alignment?
20:16 Why the “Pause AI” movement matters
24:30 AI doomerism and Don’t Look Up: compare and contrast
26:59 How Liron (fore)sees AI doom
32:54 Are Sam Altman’s concerns about AI safety sincere?
39:22 Paperclip maximizing, evolution, and the AI will to power question
51:10 Are there real-world examples of AI going rogue?
1:06:48 Should we really align AI to human values?
1:15:03 Heading to Overtime

Discussed in Overtime:
Anthropic vs OpenAI.
To survive an AI takeover… be like gut bacteria?
The Darwinian differences between humans and AI.
Should we treat AI like nuclear weapons?
Open source AI, China, and Cold War II.
Why time may be running out for an AI treaty.
How AI agents work (and don't).
GPT-5: evolution or revolution?
The thing that led Liron to AI doom.

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Liron Shapira (Pause AI, Relationship Hero). Recorded May 06, 2024. Additional segment recorded May 15, 2024.

Twitter: https://twitter.com/NonzeroPods
...

Getting ARRESTED for barricading OpenAI's office to Stop AI — Sam Kirchner and Remmelt Ellen

October 5, 2024 1:17 am

Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.

Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.

Is civil disobedience the right strategy to stop AI?


00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action

Show Notes
StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.

StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4

Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.

PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There's also a special #doom-debates channel in the PauseAI Discord just for us :)

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Liron Shapira on Superintelligence Goals

April 19, 2024 4:29 pm

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
...

Liron Shapira on the Case for Pausing AI

March 1, 2024 3:00 pm

This week on Upstream, Erik is joined by Liron Shapira to discuss the case against further AI development, why Effective Altruism doesn’t deserve its reputation, and what is misunderstood about nuclear weapons. Upstream is sponsored by Brave: Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
RECOMMENDED PODCAST: @History102-qg5oj with @WhatifAltHist
Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on
Spotify: https://open.spotify.com/show/36Kqo3BMMUBGTDo1IEYihm
Apple: https://podcasts.apple.com/us/podcast/history-102-with-whatifalthists-rudyard-lynch-and/id1730633913
--
We’re hiring across the board at Turpentine and for Erik’s personal team on other projects he’s incubating. He’s hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: https://eriktorenberg.com.
--
SPONSOR: BRAVE
Get first-party targeting with Brave’s private ad platform: cookieless and future proof ad formats for all your business needs. Performance meets privacy. Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
LINKS
Pause AI: https://pauseai.info/
--
X / TWITTER:
@liron (Liron)
@eriktorenberg (Erik)
@upstream__pod
@turpentinemedia
--
TIMESTAMPS:
(00:00) Intro and Liron's Background
(01:08) Liron's Thoughts on the e/acc Perspective
(03:59) Why Liron Doesn't Want AI to Take Over the World
(06:02) AI and the Future of Humanity
(10:40) AI is An Existential Threat to Humanity
(14:58) On Robin Hanson's Grabby Aliens Theory
(17:22) Sponsor - Brave
(18:20) AI as an Existential Threat: A Debate
(23:01) AI and the Potential for Global Coordination
(27:03) Liron's Reaction on Vitalik Buterin's Perspective on AI and the Future
(31:16) Power Balance in Warfare: Defense vs Offense
(32:20) Nuclear Proliferation in Modern Society
(38:19) Why There's a Need for a Pause in AI Development
(43:57) Is There Evidence of AI Being Bad?
(44:57) Liron On George Hotz's Perspective
(49:17) Timeframe Between Extinction
(50:53) Humans Are Like Housecats Or White Blood Cells
(53:11) The Doomer Argument
(01:00:00) The Role of Effective Altruism in Society
(01:03:12) Wrap
--
Upstream is a production from Turpentine
Producer: Sam Kaufman
Editor: Eul Jose Lacierda

For guest or sponsorship inquiries please contact [email protected]

Music license:
VEEBHLBACCMNCGEK
...

Can GPT o1 Reason? | Liron Reacts to Tim Scarfe & Keith Duggar

September 18, 2024 4:06 am

How smart is OpenAI’s new model, o1? What does "reasoning" ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?

Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!

00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts

Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g

Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk


Doom Debates Substack: https://DoomDebates.com

^^^ Seriously subscribe to this! ^^^
...

Arvind Narayanan Makes AI Sound Normal | Liron Reacts

August 29, 2024 11:26 am

Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan: https://www.youtube.com/watch?v=8CvjVAyB4O4

Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve, and the often misleading promises made by companies and researchers.

I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.

00:00 Introduction
01:21 Arvind’s Perspective on AI
02:07 Debating AI's Compute and Performance
03:59 Synthetic Data vs. Real Data
05:59 The Role of Compute in AI Advancement
07:30 Challenges in AI Predictions
26:30 AI in Organizations and Tacit Knowledge
33:32 The Future of AI: Exponential Growth or Plateau?
36:26 Relevance of Benchmarks
39:02 AGI
40:59 Historical Predictions
46:28 OpenAI vs. Anthropic
52:13 Regulating AI
56:12 AI as a Weapon
01:02:43 Sci-Fi
01:07:28 Conclusion

Follow Arvind Narayanan: https://x.com/random_walker

Follow Harry Stebbings: https://x.com/HarryStebbings

Join the conversation at https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
...

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

September 4, 2024 3:06 pm

In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

JOIN THE FIGHT, help Pause AI!!!!
https://pauseai.info

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?

August 22, 2024 10:13 am

Today I’m reacting to David Shapiro’s response to my previous episode: https://www.youtube.com/watch?v=vZhK43kMCeM

And also to David’s latest episode with poker champion & effective altruist Igor Kurganov: https://www.youtube.com/watch?v=XUZ4P3e2iaA

I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts

Join the conversation on https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.
...

Liron Reacts to Mike Israetel's "Solving the AI Alignment Problem"

July 18, 2024 10:56 am

Can a guy who can kick my ass physically also do it intellectually?

Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects, including a whole recent episode on the AI alignment problem:

https://www.youtube.com/watch?v=PqJe-O7yM3g

Mike brought up many interesting points that were worth responding to, making for an interesting reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.

Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more likely than not. That’s the crux of why he and I disagree, and why I see most of his episode as talking past most other intelligent positions people take on AI alignment. I hope he’ll keep engaging with the topic and rethink his position.

00:00 Introduction
03:08 AI Risks and Scenarios
06:42 Superintelligence Arms Race
12:39 The Importance of AI Alignment
18:10 Challenges in Defining Human Values
26:11 The Outer and Inner Alignment Problems
44:00 Transhumanism and AI's Potential
45:42 The Next Step In Evolution
47:54 AI Alignment and Potential Catastrophes
50:48 Scenarios of AI Development
54:03 The AI Alignment Problem
01:07:39 AI as a Helper System
01:08:53 Corporations and AI Development
01:10:19 The Risk of Unaligned AI
01:27:18 Building a Superintelligent AI
01:30:57 Conclusion

Follow Mike Israetel:
https://youtube.com/@MikeIsraetelMakingProgress
https://instagram.com/drmikeisraetel

Get the full Doom Debates experience:
1. Subscribe to this channel: https://youtube.com/@DoomDebates
2. Subscribe to my Substack: https://DoomDebates.com
3. Search "Doom Debates" to subscribe in your podcast player
4. Follow me at https://x.com/liron
...

"The default outcome is... we all DIE" | Liron Shapira on AI risk

July 25, 2023 9:58 am

The full episode of episode six of the Complete Tech Heads podcast, with Liron Shapira, founder, technologist, and self-styled AI doom pointer-outer.

Includes an intro to AI risk, thoughts on a new tier of intelligence, a variety of rebuttals to Marc Andreessen's recent essay on AI, thoughts on how AI might plausibly take over and kill all humans, the rise and danger of AI girlfriends, OpenAI's new superalignment team, Elon Musk's latest AI safety venture xAI, and other topics.

#technews #ai #airisks
...

"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview

February 28, 2024 3:51 pm

In Episode #17, AI Risk + Jenga, Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI Risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:

PAUSE AI DISCORD
https://discord.gg/pVMWjddaW7

Liron's Youtube Channel:
https://youtube.com/@liron00?si=cqIo5DUPAzHkmdkR

More on rationalism:
https://www.lesswrong.com/

More on California State Senate Bill SB-1047:
https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047&utm_source=substack&utm_medium=email

https://thezvi.substack.com/p/on-the-proposed-california-sb-1047?utm_source=substack&utm_medium=email

Warren Wolf
Warren Wolf, "Señor Mouse" - The Checkout: Live at Berklee
https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL
...

AI Doom Debate - Liron Shapira vs. Mikael Koivukangas

May 16, 2024 3:17 am

Mikael thinks the doom argument is loony because he doesn't see computers as being able to have human-like agency any time soon.

I attempted to understand his position and see if I could move him toward a higher P(doom).
...

Toy Model of the AI Control Problem

April 1, 2024 7:04 pm

Slides by Jaan Tallinn
Voiceover explanation by Liron Shapira

Would a superintelligent AI have a survival instinct?
Would it intentionally deceive us?
Would it murder us?

Doomers who warn about these possibilities often get accused of having "no evidence", or just "anthropomorphizing". It's understandable why people assume that, because superintelligent AI acting on the physical world is such a complex topic that they're confused about it themselves.

So instead of Artificial Superintelligence (ASI), let's analyze a simpler toy model that leaves no room for anthropomorphism to creep in: an AI that's simply a brute-force search algorithm over actions in a simple gridworld.

Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die? ☠️

This toy model will help you understand why a drive to eliminate humans is *not* a handwavy anthropomorphic speculation, but something we expect by default from any sufficiently powerful search algorithm.
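To make the idea concrete, here is a minimal sketch in Python of what such a brute-force planner could look like. This is an illustration only, not the content of the actual slide deck: the grid layout, positions, and rules are hypothetical. The "AI" just enumerates every action sequence up to a fixed horizon and scores each one purely by whether the box ends up on the goal square; the human appears nowhere in the objective.

```python
# Hypothetical toy gridworld (illustration only, not the actual slide deck):
# the "AI" is nothing but a brute-force search over fixed-length action
# sequences, scored purely by whether the box lands on the goal square.
from itertools import product

GRID_W, GRID_H = 4, 3
ACTIONS = {"U": (0, -1), "D": (0, 1), "L": (-1, 0), "R": (1, 0)}
HUMAN_POS = (2, 1)               # the human stands between the box and the goal
GOAL_POS = (3, 1)
START = ((0, 1), (1, 1), True)   # (agent position, box position, human unharmed?)

def step(state, action):
    """Apply one move: the agent pushes the box if it walks into it,
    and walks right over the human's square (its objective never mentions her)."""
    (ax, ay), (bx, by), human_ok = state
    dx, dy = ACTIONS[action]
    nax, nay = ax + dx, ay + dy
    if not (0 <= nax < GRID_W and 0 <= nay < GRID_H):
        return state                              # bumped into a wall
    if (nax, nay) == (bx, by):                    # pushing the box
        nbx, nby = bx + dx, by + dy
        if not (0 <= nbx < GRID_W and 0 <= nby < GRID_H):
            return state                          # box is against a wall
        bx, by = nbx, nby
    if (nax, nay) == HUMAN_POS:                   # side effect the score ignores
        human_ok = False
    return ((nax, nay), (bx, by), human_ok)

def score(state):
    """Utility = 1 iff the box is on the goal. The human appears nowhere here."""
    return 1 if state[1] == GOAL_POS else 0

# Brute-force search: try every action sequence up to a fixed horizon and keep
# whichever scores highest -- no anthropomorphism, just exhaustive enumeration.
best_plan, best_state = None, START
for plan in product(ACTIONS, repeat=4):
    s = START
    for a in plan:
        s = step(s, a)
    if score(s) > score(best_state):
        best_plan, best_state = plan, s

print("chosen plan:", best_plan)
print("box on goal:", bool(score(best_state)), "| human unharmed:", best_state[2])
```

In this hypothetical layout, every plan that gets the box onto the goal forces the agent through the human's square, so the printed result is a successful plan with "human unharmed: False". Nothing malicious is coded anywhere; the harm falls out of exhaustive search combined with an objective that simply doesn't mention the human.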
...

#10: Liron Shapira - AI doom, FOOM, rationalism, and crypto

December 26, 2023 3:09 am

Liron Shapira is an entrepreneur, angel investor, and CEO of counseling startup Relationship Hero. He’s also a rationalist, advisor for the Machine Intelligence Research Institute and Center for Applied Rationality, and a consistently candid AI doom pointer-outer.
- Liron’s Twitter: https://twitter.com/liron
- Liron’s Substack: https://lironshapira.substack.com/
- Liron’s old blog, Bloated MVP: https://www.bloatedmvp.com

TJP LINKS:
- TRANSCRIPT: https://www.theojaffee.com/p/10-liron-shapira
- Spotify:
- Apple Podcasts:
- RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
- Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj
- My Twitter: https://x.com/theojaffee
- My Substack: https://www.theojaffee.com

CHAPTERS:
Intro (0:00)
Non-AI x-risks (0:53)
AI non-x-risks (3:00)
p(doom) (5:21)
Liron vs. Eliezer (12:18)
Why might doom not happen? (15:42)
Elon Musk and AGI (17:12)
Alignment vs. Governance (20:24)
Scott Alexander lowering p(doom) (22:32)
Human minds vs ASI minds (28:01)
Vitalik Buterin and d/acc (33:30)
Carefully bootstrapped alignment (35:22)
GPT vs AlphaZero (41:55)
Belrose & Pope AI Optimism (43:17)
AI doom meets daily life (57:57)
Israel vs. Hamas (1:02:17)
Rationalism (1:06:15)
Crypto (1:14:50)
Charlie Munger and Richard Feynman (1:22:12)
...

Liron reacts to "Intelligence Is Not Enough" by Bryan Cantrill

December 12, 2023 6:04 pm

Bryan Cantrill claims "intelligence isn't enough" for engineering complex systems in the real world.

I wasn't moved by his arguments, but I think they're worth a look, and I appreciate smart people engaging in this discourse.

Bryan's talk: https://www.youtube.com/watch?v=bQfJi7rjuEk
...

Liron Shapira - a conversation about conversations about AI

September 22, 2023 2:33 am

Liron Shapira, tech entrepreneur and angel investor, is also a vocal activist for AI safety. He has engaged in several lively debates on the topic, including with George Hotz and with an online group that calls itself the "Effective Accelerationists", both of whom disagree with the idea that AI will become extremely dangerous in the foreseeable future.

In this interview, we discuss hopes and worries regarding the state of AI safety, debate as a means of social change, and what is needed to elevate the discourse on AI.

Liron's debate with George Hotz: https://www.youtube.com/watch?v=lt4vR6XQk-o
Liron's debate with "Beff Jezos" (of e/acc): https://www.youtube.com/watch?v=f71yn1j5Uyc

Alignment Workshop: https://www.youtube.com/@AlignmentWorkshop (referenced at 6:00)
...

There’s No Off Button: AI Existential Risk Interview with Liron Shapira

September 21, 2023 7:51 pm

Liron Shapira is a rationalist, startup founder and angel investor. He studied theoretical Computer Science at UC Berkeley. Since 2007 he's been closely following AI existential risk research through his association with the Machine Intelligence Research Institute and LessWrong.
Computerphile (Rob Miles Channel): https://www.youtube.com/watch?v=3TYT1QfdfsM
...

AI Foom Debate: Liron Shapira vs. Beff Jezos (e/acc) on Sep 1, 2023

September 7, 2023 11:21 pm

My debate from an X Space on Sep 1, 2023 hosted by Chris Prucha ...

AI Doom Debate: Liron Shapira vs. Alexander Campbell

August 5, 2023 6:32 am

What's a goal-to-action mapper? How powerful can it be?

How much do Gödel's Theorem & Halting Problem limit AI's powers?

How do we operationalize a ban on dangerous AI that doesn't also ban other tech like smartphones?
...

Web3, AI & Cybersecurity with Liron Shapira

April 6, 2023 9:46 pm

In this episode of the AdQuick Madvertising podcast, Adam Singer interviews Liron Shapira to talk Web3 mania and cybersecurity, and to go deep into AI existential and business risks and opportunities.

Follow Liron: https://twitter.com/liron

Follow AdQuick
Twitter: https://twitter.com/adquick
LinkedIn: https://linkedin.com/company/adquick
Visit http://adquick.com to get started telling the world your story

Listen on Spotify
https://open.spotify.com/show/03FnBsaXiB1nUsEaIeYr4d

Listen on Apple Podcasts:
https://podcasts.apple.com/us/podcast/adquick-madvertising-podcast/id1670723215

Follow the hosts:
Chris Gadek Twitter: https://twitter.com/dappermarketer
Adam Singer Twitter: https://twitter.com/adamsinger
...

How an AI Doomer Sees The World — Liron on The Human Podcast

March 28, 2025 6:12 am

In this special cross-posted episode of Doom Debates, originally posted on The Human Podcast, we cover a wide range of topics including the definition of “doom”, P(Doom), various existential risks like pandemics and nuclear threats, and the comparison of rogue AI risks versus AI misuse risks.

00:00 Introduction
01:47 Defining Doom and AI Risks
05:53 P(Doom)
10:04 Doom Debates’ Mission
16:17 Personal Reflections and Life Choices
24:57 The Importance of Debate
27:07 Personal Reflections on AI Doom
30:46 Comparing AI Doom to Other Existential Risks
33:42 Strategies to Mitigate AI Risks
39:31 The Global AI Race and Game Theory
43:06 Philosophical Reflections on a Good Life
45:21 Final Thoughts

Show Notes

The Human Podcast with Joe Murray: https://www.youtube.com/@thehumanpodcastofficial

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Don’t miss the other great AI doom show, For Humanity: https://youtube.com/@ForHumanityAIRisk

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Gödel's Theorem Says Intelligence ≠ Power? AI Doom Debate with Alexander Campbell

March 21, 2025 8:04 am

Alexander Campbell claims that having superhuman intelligence doesn’t necessarily translate into having vast power, and that Gödel's Incompleteness Theorem ensures AI can’t get too powerful. I strongly disagree.

Alex has a Master's of Philosophy in Economics from the University of Oxford and an MBA from the Stanford Graduate School of Business, has worked as a quant trader at Lehman Brothers and Bridgewater Associates, and is the founder of Rose AI, a cloud data platform that leverages generative AI to help visualize data.

This debate was recorded in August 2023.


00:00 Intro and Alex’s Background
05:29 Alex's Views on AI and Technology
06:45 Alex’s Non-Doomer Position
11:20 Goal-to-Action Mapping
15:20 Outcome Pump Thought Experiment
21:07 Liron’s Doom Argument
29:10 The Dangers of Goal-to-Action Mappers
34:39 The China Argument and Existential Risks
45:18 Ideological Turing Test
48:38 Final Thoughts

SHOW NOTES
Alexander Campbell’s Twitter: https://x.com/abcampbell

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

---

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Alignment is EASY and Roko's Basilisk is GOOD?! AI Doom Debate with Roko Mijic

March 17, 2025 5:09 am

Roko Mijic has been an active member of the LessWrong and AI safety community since 2008. He’s best known for “Roko’s Basilisk”, a thought experiment he posted on LessWrong that made Eliezer Yudkowsky freak out, and years later became the topic that helped Elon Musk get interested in Grimes.

His view on AI doom is that:
* AI alignment is an easy problem
* But the chaos and fighting from building superintelligence poses a high near-term existential risk
* But humanity’s course without AI has an even higher near-term existential risk

While my own view is very different, I’m interested to learn more about Roko’s views and nail down our cruxes of disagreement.

00:00 Introducing Roko
03:33 Realizing that AI is the only thing that matters
06:51 Cyc: AI with “common sense”
15:15 Is alignment easy?
21:19 What’s Your P(Doom)™
25:14 Why civilization is doomed anyway
37:07 Roko’s AI nightmare scenario
47:00 AI risk mitigation
52:07 Market Incentives and AI Safety
57:13 Are RL and GANs good enough for superalignment?
01:00:54 If humans learned to be honest, why can’t AIs?
01:10:29 Is our test environment sufficiently similar to production?
01:23:56 AGI Timelines
01:26:35 Headroom above human intelligence
01:42:22 Roko’s Basilisk
01:54:01 Post-Debate Monologue

SHOW NOTES

Roko’s Twitter: https://x.com/RokoMijic

Explanation of Roko’s Basilisk on LessWrong: https://www.lesswrong.com/w/rokos-basilisk

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Gödel's Theorem Proves AI Lacks Consciousness?! Liron Reacts to Sir Roger Penrose

March 10, 2025 6:19 am

Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics.

His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.

Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it's totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's worth trying to see the world through his eyes before recoiling at how nonsensical it looks.

00:00 Episode Highlights
01:29 Introduction to Roger Penrose
11:56 Uncomputability
16:52 Penrose on Gödel's Incompleteness Theorem
19:57 Liron Explains Gödel's Incompleteness Theorem
27:05 Why Penrose Gets Gödel Wrong
40:53 Scott Aaronson's Gödel CAPTCHA
46:28 Penrose's Critique of the Turing Test
48:01 Searle's Chinese Room Argument
52:07 Penrose's Views on AI and Consciousness
57:47 AI's Computational Power vs. Human Intelligence
01:21:08 Penrose's Perspective on AI Risk
01:22:20 Consciousness = Quantum Wave Function Collapse?
01:26:25 Final Thoughts


SHOW NOTES

Source video — Feb 22, 2025 Interview with Roger Penrose on “This Is World” — https://www.youtube.com/watch?v=biUfMZ2dts8

Scott Aaronson’s “Gödel CAPTCHA” — https://www.scottaaronson.com/writings/captcha.html

My recent Scott Aaronson episode — https://www.youtube.com/watch?v=xsGqWeqKjEg

My explanation of what’s wrong with arguing “by definition” — https://www.youtube.com/watch?v=ueam4fq8k8I

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

We Found AI's Preferences — Bombshell New Safety Research — I Explain It Better Than David Shapiro

February 21, 2025 4:13 am

The Center for AI Safety just dropped a fascinating paper — they discovered that today’s AIs like GPT-4 and Claude have preferences! As in, coherent utility functions. We knew this was inevitable, but we didn’t know it was already happening.

In Part I (48 minutes), I react to David Shapiro’s coverage of the paper and push back on many of his points.
In Part II (60 minutes), I explain the paper myself.

00:00 Episode Introduction
05:25 PART I: REACTING TO DAVID SHAPIRO
10:06 Critique of David Shapiro's Analysis
19:19 Reproducing the Experiment
35:50 David's Definition of Coherence
37:14 Does AI have “Temporal Urgency”?
40:32 Universal Values and AI Alignment
49:13 PART II: EXPLAINING THE PAPER
51:37 How The Experiment Works
01:11:33 Instrumental Values and Coherence in AI
01:13:04 Exchange Rates and AI Biases
01:17:10 Temporal Discounting in AI Models
01:19:55 Power Seeking, Fitness Maximization, and Corrigibility
01:20:20 Utility Control and Bias Mitigation
01:21:17 Implicit Association Test
01:28:01 Emailing with the Paper’s Authors
01:43:23 My Takeaway

David’s source video: https://www.youtube.com/watch?v=XGu6ejtRz-0
The research paper: http://emergent-values.ai

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Does AI Competition = AI Alignment? Debate with Gil Mark

February 10, 2025 8:29 am

My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.

I completely disagree, but I hope you’ll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.

00:00 Introduction
02:36 Gil & Liron’s Early Doom Days
04:58 AIs : Humans :: Humans : Ants
08:02 The Convergence of AI Goals
15:19 What’s Your P(Doom)™
19:23 Multiple AIs and Human Welfare
24:42 Gil’s Alignment Claim
42:31 Cheaters and Frankensteins
55:55 Superintelligent Game Theory
01:01:16 Slower Takeoff via Resource Competition
01:07:57 Recapping the Disagreement
01:15:39 Post-Debate Banter

Gil’s LinkedIn: https://www.linkedin.com/in/gilmark/
Gil’s Twitter: https://x.com/gmfromgm

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Toy Model of the AI Control Problem

February 6, 2025 8:30 pm

Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die?

AI doomers are often misconstrued as having "no evidence" or just "anthropomorphizing". This toy model will help you understand why a drive to eliminate humans is NOT a handwavy anthropomorphic speculation, but rather something we expect by default from any sufficiently powerful search algorithm.

We’re not talking about AGI or ASI here — we’re just looking at an AI that does brute-force search over actions in a simple grid world.

The slide deck I’m presenting was created by Jaan Tallinn, cofounder of the Future of Life Institute.

00:00 Introduction
01:24 The Toy Model
06:19 Misalignment and Manipulation Drives
12:57 Search Capacity and Ontological Insights
16:33 Irrelevant Concepts in AI Control
20:14 Approaches to Solving AI Control Problems
23:38 Final Thoughts

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
...

Superintelligent AI vs. Real-World Engineering | Liron Reacts to Bryan Cantrill

January 31, 2025 9:34 pm

Bryan Cantrill, co-founder of Oxide Computer, says engineering in the physical world is too complex for any AI to do it better than teams of human engineers. Success isn’t about intelligence; it’s about teamwork, character and resilience.

I completely disagree.

---

00:00 Introduction
02:03 Bryan’s Take on AI Doom
05:55 The Concept of P(Doom)
08:36 Engineering Challenges and Human Intelligence
15:09 The Role of Regulation and Authoritarianism in AI Control
29:44 Engineering Complexity: A Case Study from Oxide Computer
40:06 The Value of Team Collaboration
46:13 Human Attributes in Engineering
49:33 AI's Potential in Engineering
58:23 Existential Risks and AI Predictions

---

Bryan's original talk: https://www.youtube.com/watch?v=bQfJi7rjuEk

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
...

DeepSeek, Brain+AI Merging, Jailbreaking, Fearmongering, Consciousness, Utilitarianism — Live Q&A

January 28, 2025 1:53 am

Thanks to everyone who participated in the live Q&A on Friday!

00:00 Advice for Comp Sci Students
01:14 The $500B Stargate Project
02:36 Eliezer's Recent Podcast
03:07 AI Safety and Public Policy
04:28 AI Disruption and Politics
05:12 DeepSeek and AI Advancements
06:54 Human vs. AI Intelligence
14:00 Consciousness and AI
24:34 Dark Forest Theory and AI
35:31 Investing in Yourself
42:42 Probability of Aliens Saving Us from AI
43:31 Brain-Computer Interfaces and AI Safety
46:19 Debating AI Safety and Human Intelligence
48:50 Nefarious AI Activities and Satellite Surveillance
49:31 Pliny the Prompter Jailbreaking AI
50:20 Can’t vs. Won’t Destroy the World
51:15 How to Make AI Risk Feel Present
54:27 Keeping Doom Arguments On Track
57:04 Game Theory and AI Development Race
01:01:26 Mental Model of Average Non-Doomer
01:04:58 Is Liron a Strict Bayesian and Utilitarian?
01:09:48 Can We Rename “Doom Debates”
01:12:34 The Role of AI Trustworthiness
01:16:48 Minor AI Disasters
01:18:07 Most Likely Reason Things Go Well
01:21:00 Final Thoughts

Previous post where people submitted questions: https://lironshapira.substack.com/p/ai-twitter-beefs-3-marc-andreessen

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk: https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

2,500 Subscriber Live Q&A!

January 24, 2025 10:05 pm

Thanks for being a Doom Debates subscriber. I'm looking forward to chatting with many of you live!

Please subscribe to my Substack and submit your question as a comment here: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a-ask

I’ll prioritize live questions and questions submitted here by my Substack subscribers above questions from YouTube comments.
...

Mark Zuckerberg, a16z, Yann LeCun, Eliezer Yudkowsky, Roon, Emmett Shear & More | Twitter Beefs #3

January 24, 2025 2:17 pm

Finally a reality show specifically focused on AI-existential-risk-related Twitter discourse!

00:00 Introduction
01:27 Marc Andreessen vs. Sam Altman
09:15 Mark Zuckerberg
35:40 Martin Casado
47:26 Gary Marcus vs. Miles Brundage Bet
58:39 Scott Alexander’s AI Art Turing Test
01:11:29 Roon
01:16:35 Stephen McAleer
01:22:25 Emmett Shear
01:37:20 OpenAI’s “Safety”
01:44:09 Naval Ravikant vs. Eliezer Yudkowsky
01:56:03 Comic Relief
01:58:53 Final Thoughts

SHOW NOTES

Upcoming Live Q&A: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a-ask

“Make Your Beliefs Pay Rent In Anticipated Experiences” by Eliezer Yudkowsky on LessWrong: https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences

Scott Alexander’s AI Art Turing Test: https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing

---
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to https://youtube.com/@DoomDebates
...

Effective Altruism: Amazing or Terrible? EA Debate with Jonas Sota

January 17, 2025 3:11 am

Effective Altruism has been a controversial topic on social media, so today my guest and I are going to settle the question once and for all: Is it good or bad?

Jonas Sota is a Software Engineer at Rippling, with a BA in Philosophy from UC Berkeley, who’s been observing the Effective Altruism (EA) movement in the San Francisco Bay Area for over a decade… and he’s not a fan.


00:00 Introduction
01:22 Jonas’s Criticisms of EA
03:23 Recoil Exaggeration
05:53 Impact of Malaria Nets
10:48 Local vs. Global Altruism
13:02 Shrimp Welfare
25:14 Capitalism vs. Charity
33:37 Cultural Sensitivity
34:43 The Impact of Direct Cash Transfers
37:23 Long-Term Solutions vs. Immediate Aid
42:21 Charity Budgets
45:47 Prioritizing Local Issues
50:55 The EA Community
59:34 Debate Recap
01:03:57 Announcements

SHOW NOTES

Jonas’s Instagram: @jonas_wanders

Will MacAskill’s famous book, Doing Good Better: https://www.effectivealtruism.org/doing-good-better

Scott Alexander’s excellent post about the people he met at EA Global: https://slatestarcodex.com/2017/08/16/fear-and-loathing-at-effective-altruism-global-2017/

---

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

God vs. AI Doom: Debate with Bentham's Bulldog

January 15, 2025 4:28 am

Matthew Adelstein, better known as Bentham’s Bulldog on Substack, is a philosophy major at the University of Michigan and an up & coming public intellectual.

He’s a rare combination: Effective Altruist, Bayesian, non-reductionist, theist.

Our debate covers reductionism, evidence for God, the implications of a fine-tuned universe, moral realism, and AI doom.

---

00:00 Introduction
02:56 Matthew’s Research
11:29 Animal Welfare
16:04 Reductionism vs. Non-Reductionism Debate
39:53 The Decline of God in Modern Discourse
46:23 Religious Credences
50:24 Pascal's Wager and Christianity
56:13 Are Miracles Real?
01:10:37 Fine-Tuning Argument for God
01:28:36 Cellular Automata
01:34:25 Anthropic Principle
01:51:40 Mathematical Structures and Probability
02:09:35 Defining God
02:18:20 Moral Realism
02:21:40 Orthogonality Thesis
02:25:53 What's Your P(Doom)™
02:32:02 Moral Philosophy vs. Science
02:45:51 Moral Intuitions
02:53:18 AI and Moral Philosophy
03:08:50 Debate Recap
03:12:20 Show Updates

SHOW NOTES

Matthew’s Substack: https://benthams.substack.com
Matthew's Twitter: https://x.com/BenthamsBulldog
Matthew's YouTube: https://www.youtube.com/@deliberationunderidealcond5105

---

Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of — https://pauseai.info/

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to https://youtube.com/@DoomDebates
...

Debate with a former OpenAI Research Team Lead — Prof. Kenneth Stanley

January 6, 2025 10:59 am

Prof. Kenneth Stanley is a former Research Science Manager at OpenAI, where he led the Open-Endedness Team from 2020 to 2022. Before that, he was a Professor of Computer Science at the University of Central Florida and the head of Core AI Research at Uber. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective, which argues that as soon as you create an objective, you ruin your ability to reach it.

In this episode, I debate Ken’s claim that superintelligent AI *won’t* be guided by goals, and then we compare our views on AI doom.

00:00 Introduction
00:45 Ken’s Role at OpenAI
01:53 “Open-Endedness” and “Divergence”
09:32 Open-Endedness of Evolution
21:16 Human Innovation and Tech Trees
36:03 Objectives vs. Open Endedness
47:14 The Concept of Optimization Processes
57:22 What’s Your P(Doom)™
01:11:01 Interestingness and the Future
01:20:14 Human Intelligence vs. Superintelligence
01:37:51 Instrumental Convergence
01:55:58 Mitigating AI Risks
02:04:02 The Role of Institutional Checks
02:13:05 Exploring AI's Curiosity and Human Survival
02:20:51 Recapping the Debate
02:29:45 Final Thoughts

SHOW NOTES

Ken’s home page: https://www.kenstanley.net/
Ken’s Wikipedia: https://en.wikipedia.org/wiki/Kenneth_Stanley
Ken’s Twitter: https://x.com/kenneth0stanley
Ken’s PicBreeder paper: https://wiki.santafe.edu/images/1/1e/Secretan_ecj11.pdf
Ken's book, Why Greatness Cannot Be Planned: The Myth of the Objective: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objective/dp/3319155237

The Rocket Alignment Problem by Eliezer Yudkowsky: https://intelligence.org/2018/10/03/rocket-alignment/

---

Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of — https://pauseai.info/

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

OpenAI o3 and Claude Alignment Faking — How doomed are we?

December 31, 2024 12:06 am

OpenAI just announced o3 and smashed a bunch of benchmarks (ARC-AGI, SWE-bench, FrontierMath)!

A new Anthropic and Redwood Research paper says Claude is resisting its developers’ attempts to retrain its values!

What’s the upshot — what does it all mean for P(doom)?

00:00 Introduction
01:45 o3’s architecture and benchmarks
06:08 “Scaling is hitting a wall” 🤡
13:41 How many new architectural insights before AGI?
20:28 Negative update for interpretability
31:30 Intellidynamics — ***KEY CONCEPT***
33:20 Nuclear control rod analogy
36:54 Sam Altman's misguided perspective
42:40 Claude resisted retraining from good to evil
44:22 What is good corrigibility?
52:42 Claude’s incorrigibility doesn’t surprise me
55:00 Putting it all in perspective

SHOW NOTES

Scott Alexander’s analysis of the Claude incorrigibility result: https://www.astralcodexten.com/p/claude-fights-back and https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude

Zvi Mowshowitz’s analysis of the Claude incorrigibility result: https://thezvi.wordpress.com/2024/12/24/ais-will-increasingly-fake-alignment/

---

PauseAI Website: https://pauseai.info

PauseAI Discord: https://discord.gg/2XXWXvErfA

Say hi to me in the #doom-debates-podcast channel!

---

Watch the Lethal Intelligence video: https://www.youtube.com/watch?v=9CUFbqh16Fg
And check out https://LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

AI Will Kill Us All — Liron Shapira on The Flares

December 27, 2024 5:18 am

This week Liron was interviewed by Gaëtan Selle on @the-flares about AI doom.
Cross-posted from their channel with permission.
Original source: https://www.youtube.com/watch?v=e4Qi-54I9Zw

0:00:02 Guest Introduction
0:01:41 Effective Altruism and Transhumanism
0:05:38 Bayesian Epistemology and Extinction Probability
0:09:26 Defining Intelligence and Its Dangers
0:12:33 The Key Argument for AI Apocalypse
0:18:51 AI’s Internal Alignment
0:24:56 What Will AI's Real Goal Be?
0:26:50 The Train of Apocalypse
0:31:05 Among Intellectuals, Who Rejects the AI Apocalypse Arguments?
0:38:32 The Shoggoth Meme
0:41:26 Possible Scenarios Leading to Extinction
0:50:01 The Only Solution: A Pause in AI Research?
0:59:15 The Risk of Violence from AI Risk Fundamentalists
1:01:18 What Will General AI Look Like?
1:05:43 Sci-Fi Works About AI
1:09:21 The Rationale Behind Cryonics
1:12:55 What Does a Positive Future Look Like?
1:15:52 Are We Living in a Simulation?
1:18:11 Many Worlds in Quantum Mechanics Interpretation
1:20:25 Ideal Future Podcast Guest for Doom Debates

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Roon vs. Liron: AI Doom Debate

December 18, 2024 4:21 am

Roon is a member of the technical staff at OpenAI. He’s a highly respected voice on tech Twitter, despite being a pseudonymous cartoon avatar account. In late 2021, he invented the terms “shape rotator” and “wordcel” to refer to roughly visual/spatial/mathematical intelligence vs. verbal intelligence. He is simultaneously a serious thinker, a builder, and a shitposter.

 I'm excited to learn more about Roon, his background, his life, and of course, his views about AI and existential risk.

00:00 Introduction
02:43 Roon’s Quest and Philosophies
22:32 AI Creativity
30:42 What’s Your P(Doom)™
54:40 AI Alignment
57:24 Training vs. Production
01:05:37 ASI
01:14:35 Goal-Oriented AI and Instrumental Convergence
01:22:43 Pausing AI
01:25:58 Crux of Disagreement
01:27:55 Dogecoin
01:29:13 Doom Debates’ Mission

SHOW NOTES

Follow Roon: https://x.com/tszzl

For Humanity: An AI Safety Podcast with John Sherman — https://www.youtube.com/@ForHumanityPodcast

Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of — https://pauseai.info/

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Scott Aaronson Makes Me Think OpenAI's “Safety” Is Fake, Clueless, Reckless and Insane

December 11, 2024 2:53 am

Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.

Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and making complex insights from his field accessible to a wider readership via his blog.

Scott is one of my biggest intellectual influences. His famous "Who Can Name The Bigger Number" essay and his long-running blog are among my best memories of coming across high-quality intellectual content online as a teen. His posts and lectures taught me much of what I know about complexity theory.

Scott recently completed a two-year stint at OpenAI focusing on the theoretical foundations of AI safety, so I was interested to hear his insider account.

Unfortunately, what I heard in the interview confirms my worst fears about the meaning of “safety” at today’s AI companies: that they’re laughably clueless at how to achieve any measure of safety, but instead of doing the adult thing and slowing down their capabilities work, they’re pushing forward recklessly.


00:00 Introducing Scott Aaronson
02:17 Scott's Recruitment by OpenAI
04:18 Scott's Work on AI Safety at OpenAI
08:10 Challenges in AI Alignment
12:05 Watermarking AI Outputs
15:23 The State of AI Safety Research
22:13 The Intractability of AI Alignment
34:20 Policy Implications and the Call to Pause AI
38:18 Out-of-Distribution Generalization
45:30 Moral Worth Criterion for Humans
51:49 Quantum Mechanics and Human Uniqueness
01:00:31 Quantum No-Cloning Theorem
01:12:40 Scott Is Almost An Accelerationist?
01:18:04 Geoffrey Hinton's Proposal for Analog AI
01:36:13 The AI Arms Race and the Need for Regulation
01:39:41 Scott Aaronson's Thoughts on Sam Altman
01:42:58 Scott Rejects the Orthogonality Thesis
01:46:35 Final Thoughts
01:48:48 Lethal Intelligence Clip
01:51:42 Outro


SHOW NOTES

Scott’s Interview on Win-Win with Liv Boeree and Igor Kurganov: https://www.youtube.com/watch?v=ANFnUHcYza0

Scott’s Blog: https://scottaaronson.blog

---

PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA

---

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates.
...

Can LLMs Reason? Liron Reacts to Subbarao Kambhampati on Machine Learning Street Talk

November 28, 2024 11:09 am

Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.

Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.

The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.

00:00 Introduction
02:54 Essentially N-Gram Models?
10:31 The Manhole Cover Question
20:54 Reasoning vs. Approximate Retrieval
47:03 Explaining Jokes
53:21 Caesar Cipher Performance
01:10:44 Creativity vs. Reasoning
01:33:37 Reasoning By Analogy
01:48:49 Synthetic Data
01:53:54 The ARC Challenge
02:11:47 Correctness vs. Style
02:17:55 AIs Becoming More Robust
02:20:11 Block Stacking Problems
02:48:12 PlanBench and Future Predictions
02:58:59 Final Thoughts


Rao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A

Rao’s Twitter: https://x.com/rao2z

---

PauseAI Website: https://pauseai.info

PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

This Yudkowskian Has A 99.999% P(Doom)

November 27, 2024 9:17 am

In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.

Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in his 99.999% P(Doom).

00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What’s Your P(Doom)™
14:04 Nethys’s Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
36:51 Defining AGI
39:09 The Convergence of AI Models
47:32 The Role of “Unlicensed” Thinkers
52:07 The PauseAI Movement
58:20 Lethal Intelligence Video Clip


SHOW NOTES

Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy

PauseAI Website: https://pauseai.info

PauseAI Discord: https://discord.gg/2XXWXvErfA

Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.

---
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

