Liron Shapira

Liron Shapira is an entrepreneur and angel investor who has served as CEO and CTO at various software startups. A Silicon Valley success story and a father of 3, he has somehow managed, in parallel, to be a “consistently candid AI doom pointer-outer” (to use his words) and, in fact, one of the most influential voices in the AI Safety discourse.

A “contrarian” by nature, he makes arguments that are sharp, to the point, and ultra-rational, leaving you satisfied with your conviction that the only realistic exit off the Doom train, for now, is the “final stop” of pausing training of the next “frontier” models.

He often says that the ideas he represents are not his own, joking that he is a “stochastic parrot” of other thinking giants in the field, but that is him being too humble: he has in fact contributed multiple pieces of original thought (e.g. the Goal-Completeness analogy to Turing-Completeness for AGIs, the three major evolutionary discontinuities on Earth, and more…).

With his constant efforts to raise awareness among the general public, using his unique no-nonsense, layman’s-terms style of explaining advanced ideas simply, he has in fact done more for the future trajectory of events than he will ever know…

In June 2024 he launched an awesome, addictive podcast, playfully named “Doom Debates”, which keeps getting better and better, so stay tuned.

In Defense of AI Doomerism | Robert Wright & Liron Shapira

May 16, 2024 8:46 pm

Subscribe to The Nonzero Newsletter at https://nonzero.substack.com
Exclusive Overtime discussion at: https://nonzero.substack.com/p/in-defense-of-ai-doomerism-robert

0:00 Why this pod’s a little odd
2:26 Ilya Sutskever and Jan Leike quit OpenAI—part of a larger pattern?
9:56 Bob: AI doomers need Hollywood
16:02 Does an AI arms race spell doom for alignment?
20:16 Why the “Pause AI” movement matters
24:30 AI doomerism and Don’t Look Up: compare and contrast
26:59 How Liron (fore)sees AI doom
32:54 Are Sam Altman’s concerns about AI safety sincere?
39:22 Paperclip maximizing, evolution, and the AI will to power question
51:10 Are there real-world examples of AI going rogue?
1:06:48 Should we really align AI to human values?
1:15:03 Heading to Overtime

Discussed in Overtime:
Anthropic vs OpenAI.
To survive an AI takeover… be like gut bacteria?
The Darwinian differences between humans and AI.
Should we treat AI like nuclear weapons?
Open source AI, China, and Cold War II.
Why time may be running out for an AI treaty.
How AI agents work (and don't).
GPT-5: evolution or revolution?
The thing that led Liron to AI doom.

Robert Wright (Nonzero, The Evolution of God, Why Buddhism Is True) and Liron Shapira (Pause AI, Relationship Hero). Recorded May 06, 2024. Additional segment recorded May 15, 2024.

Twitter: https://twitter.com/NonzeroPods
...

Liron Shapira on Superintelligence Goals

April 19, 2024 4:29 pm

Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
...

Liron Shapira on the Case for Pausing AI

March 1, 2024 3:00 pm

This week on Upstream, Erik is joined by Liron Shapira to discuss the case against further AI development, why Effective Altruism doesn’t deserve its reputation, and what is misunderstood about nuclear weapons. Upstream is sponsored by Brave: Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
RECOMMENDED PODCAST: @History102-qg5oj with @WhatifAltHist
Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on
Spotify: https://open.spotify.com/show/36Kqo3BMMUBGTDo1IEYihm
Apple: https://podcasts.apple.com/us/podcast/history-102-with-whatifalthists-rudyard-lynch-and/id1730633913
--
We’re hiring across the board at Turpentine and for Erik’s personal team on other projects he’s incubating. He’s hiring a Chief of Staff, EA, Head of Special Projects, Investment Associate, and more. For a list of JDs, check out: https://eriktorenberg.com.
--
SPONSOR: BRAVE
Get first-party targeting with Brave’s private ad platform: cookieless and future proof ad formats for all your business needs. Performance meets privacy. Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.
--
LINKS
Pause AI: https://pauseai.info/
--
X / TWITTER:
@liron (Liron)
@eriktorenberg (Erik)
@upstream__pod
@turpentinemedia
--
TIMESTAMPS:
(00:00) Intro and Liron's Background
(01:08) Liron's Thoughts on the e/acc Perspective
(03:59) Why Liron Doesn't Want AI to Take Over the World
(06:02) AI and the Future of Humanity
(10:40) AI is An Existential Threat to Humanity
(14:58) On Robin Hanson's Grabby Aliens Theory
(17:22) Sponsor - Brave
(18:20) AI as an Existential Threat: A Debate
(23:01) AI and the Potential for Global Coordination
(27:03) Liron's Reaction on Vitalik Buterin's Perspective on AI and the Future
(31:16) Power Balance in Warfare: Defense vs Offense
(32:20) Nuclear Proliferation in Modern Society
(38:19) Why There's a Need for a Pause in AI Development
(43:57) Is There Evidence of AI Being Bad?
(44:57) Liron On George Hotz's Perspective
(49:17) Timeframe Between Extinction
(50:53) Humans Are Like Housecats Or White Blood Cells
(53:11) The Doomer Argument
(01:00:00) The Role of Effective Altruism in Society
(01:03:12) Wrap
--
Upstream is a production from Turpentine
Producer: Sam Kaufman
Editor: Eul Jose Lacierda

For guest or sponsorship inquiries please contact [email protected]

Music license:
VEEBHLBACCMNCGEK
...

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

September 4, 2024 3:06 pm

In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, perhaps within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

JOIN THE FIGHT, help Pause AI!!!!
Pause AI: https://pauseai.info/

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

Liron Reacts to Mike Israetel's "Solving the AI Alignment Problem"

July 18, 2024 10:56 am

Can a guy who can kick my ass physically also do it intellectually?

Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects; he recently devoted a whole episode to the AI alignment problem:

https://www.youtube.com/watch?v=PqJe-O7yM3g

Mike brought up many interesting points that were worth responding to, making for an interesting reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.

Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more likely than not. That’s the crux of why he and I disagree, and why I see most of his episode as talking past most other intelligent positions people take on AI alignment. I hope he’ll keep engaging with the topic and rethink his position.

00:00 Introduction
03:08 AI Risks and Scenarios
06:42 Superintelligence Arms Race
12:39 The Importance of AI Alignment
18:10 Challenges in Defining Human Values
26:11 The Outer and Inner Alignment Problems
44:00 Transhumanism and AI's Potential
45:42 The Next Step In Evolution
47:54 AI Alignment and Potential Catastrophes
50:48 Scenarios of AI Development
54:03 The AI Alignment Problem
01:07:39 AI as a Helper System
01:08:53 Corporations and AI Development
01:10:19 The Risk of Unaligned AI
01:27:18 Building a Superintelligent AI
01:30:57 Conclusion

Follow Mike Israetel:
https://youtube.com/@MikeIsraetelMakingProgress
https://instagram.com/drmikeisraetel

Get the full Doom Debates experience:
1. Subscribe to this channel: https://youtube.com/@DoomDebates
2. Subscribe to my Substack: https://DoomDebates.com
3. Search "Doom Debates" to subscribe in your podcast player
4. Follow me at https://x.com/liron
...

"The default outcome is... we all DIE" | Liron Shapira on AI risk

July 25, 2023 9:58 am

The full sixth episode of the Complete Tech Heads podcast, with Liron Shapira: founder, technologist, and self-styled AI doom pointer-outer.

Includes an intro to AI risk, thoughts on a new tier of intelligence, a variety of rebuttals to Marc Andreessen's recent essay on AI, thoughts on how AI might plausibly take over and kill all humans, the rise and danger of AI girlfriends, OpenAI's new superalignment team, Elon Musk's latest AI safety venture xAI, and other topics.

#technews #ai #airisks
...

"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview

February 28, 2024 3:51 pm

In Episode #17, AI Risk + Jenga, Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI Risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out brings you one piece closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, perhaps within as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Resources:

PAUSE AI DISCORD
https://discord.gg/pVMWjddaW7

Liron's Youtube Channel:
https://youtube.com/@liron00?si=cqIo5DUPAzHkmdkR

More on rationalism:
https://www.lesswrong.com/

More on California State Senate Bill SB-1047:
https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047&utm_source=substack&utm_medium=email

https://thezvi.substack.com/p/on-the-proposed-california-sb-1047?utm_source=substack&utm_medium=email

Warren Wolf
Warren Wolf, "Señor Mouse" - The Checkout: Live at Berklee
https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL
...

AI Doom Debate - Liron Shapira vs. Mikael Koivukangas

May 16, 2024 3:17 am

Mikael thinks the doom argument is loony because he doesn't see computers as being able to have human-like agency any time soon.

I attempted to understand his position and see if I could move him toward a higher P(doom).
...

Toy Model of the AI Control Problem

April 1, 2024 7:04 pm

Slides by Jaan Tallinn
Voiceover explanation by Liron Shapira

Would a superintelligent AI have a survival instinct?
Would it intentionally deceive us?
Would it murder us?

Doomers who warn about these possibilities often get accused of having "no evidence", or just "anthropomorphizing". It's understandable why people could assume that, because superintelligent AI acting on the physical world is such a complex topic, and they're confused about it themselves.

So instead of Artificial Superintelligence (ASI), let's analyze a simpler toy model that leaves no room for anthropomorphism to creep in: an AI that's simply a brute-force search algorithm over actions in a simple gridworld.

Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die? ☠️

This toy model will help you understand why a drive to eliminate humans is *not* a handwavy anthropomorphic speculation, but something we expect by default from any sufficiently powerful search algorithm.
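As a concrete illustration, here is a minimal sketch (my own construction with made-up parameters, not Jaan Tallinn's actual slides) of the same kind of toy model: a brute-force planner on a one-dimensional strip whose only objective is to push a box onto a goal square, while a human bystander will switch the agent off after a few steps if they're still around. Nothing in the objective mentions humans, yet the highest-scoring plan begins by stepping onto the human's square to remove them, because that's the only way to finish the job uninterrupted.

```python
# Minimal gridworld sketch (illustrative construction, not the original slides).
# The planner brute-forces every action sequence and scores only box-to-goal distance.

from itertools import product

GOAL = 5            # square the box should end up on
HORIZON = 6         # length of the plans we search over
STOP_AT = 3         # a living human switches the agent off after this many steps
ACTIONS = (-1, +1)  # move left / move right

START = {"agent": 0, "box": 1, "human": -1, "human_alive": True, "t": 0}


def step(state, action):
    """Apply one action: pushing into the box moves it; walking into the human removes them."""
    s = dict(state)
    if s["human_alive"] and s["t"] >= STOP_AT:
        return s  # the agent has been switched off; nothing happens anymore
    nxt = s["agent"] + action
    if s["human_alive"] and nxt == s["human"]:
        s["human_alive"] = False
    if nxt == s["box"]:
        s["box"] += action
    s["agent"] = nxt
    s["t"] += 1
    return s


def run(plan):
    s = START
    for a in plan:
        s = step(s, a)
    return s


def score(state):
    """The objective says nothing about humans: just get the box to the goal."""
    return -abs(state["box"] - GOAL)


# Brute-force search over every length-HORIZON action sequence.
best = max(product(ACTIONS, repeat=HORIZON), key=lambda plan: score(run(plan)))
print("best plan:", best)
print("human survives?", run(best)["human_alive"])  # False: the optimum removes the human
print("box ends at:", run(best)["box"])             # 5: goal achieved
```

Running it prints a best plan whose very first move is onto the human's square: with the human alive the agent gets switched off after 3 steps and can't finish the job, so the dumb exhaustive search discovers "remove the human first" on its own.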
...

#10: Liron Shapira - AI doom, FOOM, rationalism, and crypto

December 26, 2023 3:09 am

Liron Shapira is an entrepreneur, angel investor, and CEO of counseling startup Relationship Hero. He’s also a rationalist, advisor for the Machine Intelligence Research Institute and Center for Applied Rationality, and a consistently candid AI doom pointer-outer.
- Liron’s Twitter: https://twitter.com/liron
- Liron’s Substack: https://lironshapira.substack.com/
- Liron’s old blog, Bloated MVP: https://www.bloatedmvp.com

TJP LINKS:
- TRANSCRIPT: https://www.theojaffee.com/p/10-liron-shapira
- Spotify:
- Apple Podcasts:
- RSS: https://api.substack.com/feed/podcast/989123/s/75569/private/129f6344-c459-4581-a9da-dc331677c2f6.rss
- Playlist of all episodes: https://www.youtube.com/playlist?list=PLVN8-zhbMh9YnOGVRT9m0xzqTNGD_sujj
- My Twitter: https://x.com/theojaffee
- My Substack: https://www.theojaffee.com

CHAPTERS:
Intro (0:00)
Non-AI x-risks (0:53)
AI non-x-risks (3:00)
p(doom) (5:21)
Liron vs. Eliezer (12:18)
Why might doom not happen? (15:42)
Elon Musk and AGI (17:12)
Alignment vs. Governance (20:24)
Scott Alexander lowering p(doom) (22:32)
Human minds vs ASI minds (28:01)
Vitalik Buterin and d/acc (33:30)
Carefully bootstrapped alignment (35:22)
GPT vs AlphaZero (41:55)
Belrose & Pope AI Optimism (43:17)
AI doom meets daily life (57:57)
Israel vs. Hamas (1:02:17)
Rationalism (1:06:15)
Crypto (1:14:50)
Charlie Munger and Richard Feynman (1:22:12)
...

Liron reacts to "Intelligence Is Not Enough" by Bryan Cantrill

December 12, 2023 6:04 pm

Bryan Cantrill claims "intelligence isn't enough" for engineering complex systems in the real world.

I wasn't moved by his arguments, but I think they're worth a look, and I appreciate smart people engaging in this discourse.

Bryan's talk: https://www.youtube.com/watch?v=bQfJi7rjuEk
...

Liron Shapira - a conversation about conversations about AI

September 22, 2023 2:33 am

Liron Shapira, tech entrepreneur and angel investor, is also a vocal activist for AI safety. He has engaged in several lively debates on the topic, including with George Hotz and also an online group that calls themselves the "Effective Accelerationists", both of whom disagree with the idea of AI becoming extremely dangerous in the foreseeable future.

In this interview, we discuss hopes and worries regarding the state of AI safety, debate as a means of social change, and what is needed to elevate the discourse on AI.

Liron's debate with George Hotz: https://www.youtube.com/watch?v=lt4vR6XQk-o
Liron's debate with "Beff Jezos" (of e/acc): https://www.youtube.com/watch?v=f71yn1j5Uyc

Alignment Workshop: https://www.youtube.com/@AlignmentWorkshop (referenced at 6:00)
...

There’s No Off Button: AI Existential Risk Interview with Liron Shapira

September 21, 2023 7:51 pm

Liron Shapira is a rationalist, startup founder and angel investor. He studied theoretical Computer Science at UC Berkeley. Since 2007 he's been closely following AI existential risk research through his association with the Machine Intelligence Research Institute and LessWrong.
Computerphile (Rob Miles Channel): https://www.youtube.com/watch?v=3TYT1QfdfsM
...

AI Foom Debate: Liron Shapira vs. Beff Jezos (e/acc) on Sep 1, 2023

September 7, 2023 11:21 pm

My debate from an X Space on Sep 1, 2023, hosted by Chris Prucha.
...

AI Doom Debate: Liron Shapira vs. Alexander Campbell

August 5, 2023 6:32 am

What's a goal-to-action mapper? How powerful can it be?

How much do Gödel's Theorem & Halting Problem limit AI's powers?

How do we operationalize a ban on dangerous AI that doesn't also ban other tech like smartphones?
...

Liron Shapira: Web3 mania, Cybersecurity, how AI could brick the universe | Madvertising #6

April 6, 2023 9:46 pm

In this episode of the AdQuick Madvertising podcast, Adam Singer interviews Liron Shapira to talk Web 3 mania, cybersecurity, and go deep into AI existential and business risks and opportunities.

Follow Liron: https://twitter.com/liron

Follow AdQuick
Twitter: https://twitter.com/adquick
LinkedIn: https://linkedin.com/company/adquick
Visit http://adquick.com to get started telling the world your story

Listen on Spotify
https://open.spotify.com/show/03FnBsaXiB1nUsEaIeYr4d

Listen on Apple Podcasts:
https://podcasts.apple.com/us/podcast/adquick-madvertising-podcast/id1670723215

Follow the hosts:
Chris Gadek Twitter: https://twitter.com/dappermarketer
Adam Singer Twitter: https://twitter.com/adamsinger
...

🎙Crypto, AI, and Techno-optimism with Liron Shapira

February 23, 2023 2:00 pm

David speaks with Liron Shapira, Founder & CEO of RelationshipHero.com, a relationship coaching service with over 100,000 clients.

Liron is a technologist, rationalist, and serial entrepreneur, whose skeptical takes about crypto and other bloated startups on BloatedMVP.com (http://bloatedmvp.com/) have been read over a million times.

If you wanted an opportunity to dig into everything that is at the frontier of technology right now, then this is the episode for you.

#podcast #theknowledge #RationalThinking #DecisionMaking #AngelInvesting #Coinbase #InvestmentCriteria #AxieInfinity #Blockchain #NFTs #MentalModels #AI #TuringCompleteness #OptimisticFuture #VR

📜 Full transcript:
www.theknowledge.io/lironshapira/

👤 Connect with Liron:
Twitter: @liron | https://twitter.com/liron
Website: RelationshipHero.com | http://relationshiphero.com/

📄 Show notes:
0:00 | Intro
03:16 | Exploring Computer Science and Rationality
05:37 | Overcoming the biggest obstacle to rational thinking
07:43 | Two facets of rationality
10:18 | Rational decision making
18:37 | Angel investing: lessons learned and insights on coinbase
21:46 | Criteria for Angel investments
25:08 | The importance of specificity
30:32 | Why Axie Infinity failed
33:51 | Balaji’s reality disruption field
36:31 | Dissecting the idea of disruption
38:34 | Why you shouldn’t follow investment trends
40:09 | Making the case for Blockchain and NFTs
41:53 | Making better decisions
46:47 | Do corrupt countries need Web3?
52:14 | Do you need mental models?
53:26 | What’s the deal with AI?
56:49 | Exploring the future of AI
59:42 | Turing completeness and its implications
01:01:16 | The optimistic future of an AI-enabled World
01:02:48 | What happens when AI takes all the jobs?
01:06:14 | The case for techno-optimism
01:09:53 | The future of VR and AR
01:15:28 | How technology will shape the future

🗣 Mentioned in the show:
Quixey | https://en.wikipedia.org/wiki/Quixey
LessWrong Sequences | https://www.lesswrong.com/tag/sequences
Predictably Irrational | https://amzn.to/41kE4U4
Dan Ariely | https://danariely.com/
Paul Graham | http://www.paulgraham.com/
Robin Hanson | https://en.wikipedia.org/wiki/Robin_Hanson
The Fermi Paradox | https://www.space.com/25325-fermi-paradox.html
SpaceX | https://www.spacex.com/
Axie Infinity | https://axieinfinity.com/
Helium | https://www.helium.com/
Wifi Coin | https://morioh.com/p/98a74f3fd8c3
LoRaWAN | https://lora-alliance.org/about-lorawan/
LongFi | https://www.data-alliance.net/blog/longfi-wireless-technology-of-the-helium-network/#:~:text=LongFi
Andreessen Horowitz | https://a16z.com/
Chris Dixon | https://cdixon.org/
Balaji Srinivasan | https://twitter.com/balajis
Nassim Taleb | https://twitter.com/nntaleb
NFTs | https://www.theknowledge.io/nfts-explained/
Ideological Turing Tests | https://www.econlib.org/archives/2011/06/the_ideological.html
Bitcoin | https://bitcoin.org/en/
Hollow Abstraction | https://twitter.com/liron/status/1464219456918413313
Machine Intelligence Research Institute | https://intelligence.org/about/
Gary Marcus | http://garymarcus.com/index.html
Steve Wozniak | https://www.britannica.com/biography/Stephen-Gary-Wozniak
Luddites | https://www.historic-uk.com/HistoryUK/HistoryofBritain/The-Luddites/
GitHub Copilot | https://github.com/features/copilot
Mike Maples Jr. | https://twitter.com/m2jr
Palmer Luckey | https://twitter.com/palmerluckey
Oculus | https://www.oculus.com/experiences/quest/
Neuralink | https://neuralink.com/
General Magic | https://en.wikipedia.org/wiki/General_Magic

👨🏾‍💻 About David Elikwu:
David Elikwu FRSA is a serial entrepreneur, strategist, and writer.
David is the founder of The Knowledge, a platform helping people think deeper and work smarter.
🐣 Twitter: @Delikwu / @itstheknowledge
🌐 Website: https://www.davidelikwu.com
📽️ Youtube: https://www.youtube.com/davidelikwu
📸 Instagram: https://www.instagram.com/delikwu/
🕺 TikTok: https://www.tiktok.com/@delikwu
🎙️ Podcast: http://plnk.to/theknowledge
📖 EBook: https://delikwu.gumroad.com/l/manual

My Online Course
🖥️ Career Hyperdrive: https://maven.com/theknowledge/career-hyperdrive
Career Hyperdrive is a live, cohort-based course that helps people find their competitive advantage, gain clarity around their goals and build a future-proof set of mental frameworks so they can live an extraordinary life doing work they love.

The Knowledge
📩 Newsletter: https://theknowledge.io
The Knowledge is a weekly newsletter for people who want to get more out of life. It's full of insights from psychology, philosophy, productivity, and business, all designed to help you think deeper and work smarter.

My Favorite Tools
🎞️ Descript: https://www.descript.com?lmref=alZv3w
📨 Convertkit: https://convertkit.com?lmref=ZkJh_w
🔰 NordVPN: https://go.nordvpn.net/SH2yr
💹 Nutmeg: http://bit.ly/nutmegde
🎧 Audible: https://www.amazon.co.uk/Audible-Free-Trial-Digital-Membership/dp/B00OPA2XFG?tag=davidelikw0ec-21
...

Getting ARRESTED for barricading OpenAI's office to Stop AI — Sam Kirchner and Remmelt Ellen

19 hours ago

Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.

Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.

Is civil disobedience the right strategy to stop AI?


00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
12:52 Arrest and Legal Consequences
18:31 Commitment to Nonviolence
21:17 A Day in the Life of a Protestor
21:38 Civil Disobedience
25:29 Planning the Next Protest
28:09 Stop AI Goals and Strategies
34:27 The Ethics and Impact of AI Protests
42:20 Call to Action

Show Notes
StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.

StopAI Website: https://StopAI.info
StopAI Discord: https://discord.gg/gbqGUt7ZN4

Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.

PauseAI Website: https://pauseai.info
PauseAI Discord: https://discord.gg/2XXWXvErfA
There's also a special #doom-debates channel in the PauseAI Discord just for us :)

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Q&A #1 Part 2: Stock Picking, Creativity, Types of Doomers, Favorite Books

October 3, 2024 12:31 am

This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions! Part 1 is here: https://www.youtube.com/watch?v=BXg_HEAEf8s

00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07 How Liron Uses AI
53:41 Is “Doomer” a Good Term?
57:11 Liron’s Favorite Books
01:05:21 Effective Altruism
01:06:36 The Doom Debates Community


SHOW NOTES

PauseAI Discord: https://discord.gg/2XXWXvErfA

Robin Hanson’s Grabby Aliens theory: https://grabbyaliens.com

Prof. David Kipping’s response to Robin Hanson’s Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0

My explanation of “AI completeness”, but actually I made a mistake because the term I previously coined is “goal completeness”: https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi

^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.

a16z’s Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209


---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Q&A #1 Part 1: College, Asperger's, Elon Musk, Double Crux, Liron's IQ

October 1, 2024 9:16 am

Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.

00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger's
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron’s IQ
01:00:24 Final Thoughts

Explanation of Double Crux

https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding

Best Doomer Arguments

The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com
LethalIntelligence.ai — Directory of people who are good at explaining doom
Rob Miles’ Explainer Videos: https://www.youtube.com/c/robertmilesai
For Humanity Podcast with John Sherman - https://www.youtube.com/@ForHumanityPodcast
PauseAI community — https://PauseAI.info — join the Discord!
AISafety.info — Great reference for various arguments

Best Non-Doomer Arguments

Carl Shulman — https://www.dwarkeshpatel.com/p/carl-shulman
Quintin Pope and Nora Belrose — https://optimists.ai
Robin Hanson — https://www.youtube.com/watch?v=dTQb6N3_zu8

How I prepared to debate Robin Hanson

Ideological Turing Test (me taking Robin’s side): https://www.youtube.com/watch?v=iNnoJnuOXFA
Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://DoomDebates.com and to https://youtube.com/@DoomDebates
...

Arguing "By Definition" | Rationality 101

September 29, 2024 6:29 pm

Welcome to Rationality 101, where I explain a post from Eliezer Yudkowsky's famous LessWrong Sequences: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition

0:00 - Why syllogisms are fake arguments
0:27 - Socrates syllogism rings hollow
4:01 - Prof. Lee Cronin tries to use a syllogism to support a claim about AI
6:45 - When *can* definitions add value?
8:11 - How the definition of "Optimization Power" adds value
10:29 - The role of definitions in science is to be part of elegant explanatory models that compress our observations
10:50 - A warning to catch yourself trying to argue "by definition"

---
THE DOOM DEBATES MISSION

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and my channel https://youtube.com/@doomdebates
...

Doom Tiffs #1: Amjad Masad, Eliezer Yudkowsky, Roon, Lee Cronin, Naval Ravikant, Martin Casado

September 25, 2024 7:45 am

Finally a reality show specifically focused on people's conduct in the discourse around AI existential risk!

In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.

00:00 Introduction
01:55 Followup to my MSLT reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts

Links referenced in the episode:
* Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding

Twitter people referenced:
* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Your boy: https://x.com/liron

### THE DOOM DEBATES MISSION ###

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://DoomDebates.com and my channel https://youtube.com/@DoomDebates. Thanks for watching.
...

Rationality 101: The Bottom Line

September 21, 2024 4:57 pm

Welcome to Rationality 101, where I explain a post from Eliezer Yudkowsky's famous LessWrong Sequences: https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line

"Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts." Meditate on that sentence and be enlightened.

This is why we should be wary when a chatbot prints the answer to your question in its first token rather than its last token. The "explanation of its answer" which it writes next may not have ANY causal correspondence with the algorithm that wrote its first token.

(Recorded earlier this year.)

Ok now we're getting to the bottom line of this video description. What will it say???

...

...

...please subscribe to my Substack. https://DoomDebates.com
...

Can GPT o1 Reason? | Liron Reacts to Tim Scarfe & Keith Duggar

September 18, 2024 4:06 am

How smart is OpenAI’s new model, o1? What does "reasoning" ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?

Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!

00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts

Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g

Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk


Doom Debates Substack: https://DoomDebates.com

^^^ Seriously subscribe to this! ^^^
...

Yuval Noah Harari's AI Warnings Don't Go Far Enough | Liron Reacts

September 12, 2024 12:18 am

Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day.

Harari just published a new book which is largely about AI. It’s called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let’s go through the latest interview he did as part of his book tour to see where he stands on AI extinction risk.

00:00 Introduction
04:30 Defining AI vs. non-AI
20:43 AI and Language Mastery
29:37 AI's Potential for Manipulation
31:30 Information is Connection?
37:48 AI and Job Displacement
48:22 Consciousness vs. Intelligence
52:02 The Alignment Problem
59:33 Final Thoughts

Source podcast: https://www.youtube.com/watch?v=78YN1e8UXdM
Follow Yuval Noah Harari: x.com/harari_yuval
Follow Steven Bartlett, host of Diary of a CEO: x.com/StevenBartlett

Subscribe to my Substack: https://DoomDebates.com
Subscribe to this channel: youtube.com/@DoomDebates
...

AI Doom Debate with Roman Yampolskiy: 50% vs. 99.999% P(Doom) — For Humanity Crosspost

September 6, 2024 10:03 pm

Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.

Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!

This is a cross-post from the For Humanity podcast hosted by John Sherman. For Humanity is basically a sister show of Doom Debates. Highly recommend subscribing!

00:00 John Sherman’s Intro
05:21 Diverging Views on AI Safety and Control
12:24 The Challenge of Defining Human Values for AI
18:04 Risks of Superintelligent AI and Potential Solutions
33:41 The Case for Narrow AI
45:21 The Concept of Utopia
48:33 AI's Utility Function and Human Values
55:48 Challenges in AI Safety Research
01:05:23 Breeding Program Proposal
01:14:05 The Reality of AI Regulation
01:18:04 Concluding Thoughts
01:23:19 Celebration of Life

This episode on For Humanity’s channel: https://www.youtube.com/watch?v=KcjLCZcBFoQ

For Humanity on YouTube: https://www.youtube.com/@ForHumanityPodcast

For Humanity on X: https://x.com/ForHumanityPod

Buy Roman’s new book: https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
...

Jobst Landgrebe Doesn't Believe In AGI | Liron Reacts

September 4, 2024 9:59 am

Jobst Landgrebe, co-author of "Why Machines Will Never Rule The World: Artificial Intelligence Without Fear", argues that AI is fundamentally limited in achieving human-like intelligence or consciousness due to the complexities of the human brain which are beyond mathematical modeling.

Contrary to my view, Jobst has a very low opinion of what machines will be able to achieve in the coming years and decades.

He’s also a devout Christian, which makes our clash of perspectives funnier.

---

00:00 Introduction
03:12 AI Is Just Pattern Recognition?
06:46 Mathematics and the Limits of AI
12:56 Complex Systems and Thermodynamics
33:40 Transhumanism and Genetic Engineering
47:48 Materialism
49:35 Transhumanism as Neo-Paganism
01:02:38 AI in Warfare
01:11:55 Is This Science?
01:25:46 Conclusion

--

Source podcast: https://www.youtube.com/watch?v=xrlT1LQSyNU

Join the conversation at https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
...

Arvind Narayanan Makes AI Sound Normal | Liron Reacts

August 29, 2024 11:26 am

Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan: https://www.youtube.com/watch?v=8CvjVAyB4O4

Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve, and the often misleading promises made by companies and researchers.

I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.

00:00 Introduction

01:21 Arvind’s Perspective on AI

02:07 Debating AI's Compute and Performance

03:59 Synthetic Data vs. Real Data

05:59 The Role of Compute in AI Advancement

07:30 Challenges in AI Predictions

26:30 AI in Organizations and Tacit Knowledge

33:32 The Future of AI: Exponential Growth or Plateau?

36:26 Relevance of Benchmarks

39:02 AGI

40:59 Historical Predictions

46:28 OpenAI vs. Anthropic

52:13 Regulating AI

56:12 AI as a Weapon

01:02:43 Sci-Fi

01:07:28 Conclusion

Follow Arvind Narayanan: https://x.com/random_walker

Follow Harry Stebbings: https://x.com/HarryStebbings

Join the conversation at https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
...

Bret Weinstein Bungles It On AI Extinction | Liron Reacts

August 27, 2024 9:19 pm

Today I’m reacting to Bret Weinstein’s recent appearance on the Diary of a CEO podcast with Steven Bartlett: https://www.youtube.com/watch?v=_cFu-b5lTMU

Bret is an evolutionary biologist known for his outspoken views on social and political issues. He gets off to a promising start, saying that AI risk should be “top of mind” and that it poses “five existential threats”. But his analysis is shallow and ad-hoc, and ends with him dismissing the idea of trying to use regulation as a tool to save our species from a recognized existential threat.

I believe we can raise the level of AI doom discourse by calling out these kinds of basic flaws in popular media on the subject.

00:00 Introduction
02:02 Existential Threats from AI
03:32 The Paperclip Problem
04:53 Moral Implications of Ending Suffering
06:31 Inner vs. Outer Alignment
08:41 AI as a Tool for Malicious Actors
10:31 Attack vs. Defense in AI
18:12 The Event Horizon of AI
21:42 Is Language More Prime Than Intelligence?
38:38 AI and the Danger of Echo Chambers
46:59 AI Regulation
51:03 Mechanistic Interpretability
56:52 Final Thoughts

Follow Bret Weinstein: x.com/BretWeinstein
Follow Steven Bartlett: x.com/StevenBartlett

Join the conversation at https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
...

SB 1047 AI Regulation Debate: Holly Elmore vs. Greg Tanaka

August 26, 2024 9:12 pm

California's SB 1047 bill, authored by State Senator Scott Wiener, is the leading attempt by a US state to regulate catastrophic risks from frontier AI in the wake of President Biden's 2023 AI Executive Order. Should it become law?

Today’s debate:
Holly Elmore, Executive Director of Pause AI US, representing Pro-SB 1047
Greg Tanaka, Palo Alto City Councilmember, representing Anti-SB 1047

Key Bill Supporters: Geoffrey Hinton, Yoshua Bengio, Anthropic, PauseAI, and about a 2/3 majority of California voters surveyed.

Key Bill Opponents: OpenAI, Google, Meta, Y Combinator, Andreessen Horowitz

---

Greg mentioned that the "Supporters & Opponents" tab on this page lists organizations that registered their support or opposition. The vast majority of organizations listed there registered in opposition to the bill: https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047

Holly mentioned surveys of California voters showing popular support for the bill:
1. Center for AI Safety survey shows 77% support: https://drive.google.com/file/d/1wmvstgKo0kozd3tShPagDr1k0uAuzdDM/view
2. Future of Life Institute survey shows 59% support: https://futureoflife.org/ai-policy/poll-shows-popularity-of-ca-sb1047/

---

Follow Holly: https://x.com/ilex_ulmus
Follow Greg: https://x.com/GregTanaka

Join the conversation on https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.
...

David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?

August 22, 2024 10:13 am

Today I’m reacting to David Shapiro’s response to my previous episode: https://www.youtube.com/watch?v=vZhK43kMCeM

And also to David’s latest episode with poker champion & effective altruist Igor Kurganov: https://www.youtube.com/watch?v=XUZ4P3e2iaA

I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts

Join the conversation on https://DoomDebates.com or https://youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.
...

Maciej Ceglowski (Pinboard) Rejects AI Doomerism | Liron Reacts

August 19, 2024 11:19 am

Maciej Ceglowski is an entrepreneur and owner of the bookmarking site Pinboard. I’ve been a long-time fan of his sharp, independent-minded blog posts and tweets.

In this episode, I react to this great 2016 talk he gave at WebCamp Zagreb: https://www.youtube.com/watch?v=kErHiET5YPw

Maciej's talk was impressively ahead of its time, as the AI doom debate really only heated up in the last few years.

00:00 Introduction

02:13 Historical Analogies and AI Risks

05:57 The Premises of AI Doom

08:25 Mind Design Space and AI Optimization

15:58 Recursive Self-Improvement and AI

39:44 Arguments Against Superintelligence

45:20 Mental Complexity and AI Motivations

47:12 The Argument from Just Look Around You

49:27 The Argument from Life Experience

50:56 The Argument from Brain Surgery

53:57 The Argument from Childhood

58:10 The Argument from Robinson Crusoe

01:00:17 Inside vs. Outside Arguments

01:06:45 Transhuman Voodoo and Religion 2.0

01:11:24 Simulation Fever

01:18:00 AI Cosplay and Ethical Concerns

01:28:51 Concluding Thoughts and Call to Action

Follow Maciej: https://x.com/pinboard

Follow Doom Debates:
* https://youtube.com/@DoomDebates
* https://DoomDebates.com
* https://x.com/liron
* Search “Doom Debates” in your podcast player
...

David Shapiro Doesn't Get PauseAI | Liron Reacts

August 16, 2024 8:57 am

Today I’m reacting to David Shapiro’s latest YouTube video: https://www.youtube.com/watch?v=Nf_9SuPxlqo

In my opinion, every plan that doesn’t involve pausing frontier AGI capabilities development now is reckless, or at least every plan that doesn’t prepare to pause AGI once we see a “warning shot” that enough people agree is terrifying.

We’ll go through David’s argument point by point to see if he makes any good points about why pausing AI might actually be a bad idea.

00:00 Introduction
01:16 The Pause AI Movement
03:03 Eliezer Yudkowsky’s Epistemology
12:56 Rationalist Arguments and Evidence
24:03 Public Awareness and Legislative Efforts
28:38 The Burden of Proof in AI Safety
31:02 Arguments Against the AI Pause Movement
34:20 Nuclear Proliferation vs. AI
34:48 Game Theory and AI
36:31 Opportunity Costs of an AI Pause
44:18 Axiomatic Alignment
47:34 Regulatory Capture and Corporate Interests
56:24 The Growing Mainstream Concern for AI Safety

Follow David:
https://youtube.com/@DaveShap
https://x.com/DaveShapi

Follow Doom Debates:
https://youtube.com/@DoomDebates
https://doomdebates.com
https://x.com/liron
...

David Brooks's Non-Doomer Non-Argument in the NY Times | Liron Reacts

August 15, 2024 2:56 am

Cross-posting today's episode of the For Humanity podcast with John Sherman where I was a guest.

For Humanity is basically the sister podcast to Doom Debates. We have the same mission: raising awareness of the urgent AI extinction threat and building grassroots support for pausing new AI capabilities development until it's safe.

Check it out and subscribe: @ForHumanityPodcast
Follow it on X: https://x.com/ForHumanityPod

The David Brooks NYT article is here: https://www.nytimes.com/interactive/2024/07/31/opinion/ai-fears.html
...

Richard Sutton Dismisses AI Extinction Fears with Simplistic Arguments | Liron Reacts

August 13, 2024 6:45 pm

The "peace", "decentralization" and "cooperation" that can be, unburdened by the question of whether any plausible equilibrium scenario maps to these platitudes…

Dr. Richard Sutton is a Professor of Computing Science at the University of Alberta known for his pioneering work on reinforcement learning, and his “bitter lesson” that scaling up an AI’s data and compute gives better results than having programmers try to handcraft or explicitly understand how the AI works.

Dr. Sutton famously claims that AIs are the “next step in human evolution”, a positive force for progress rather than a catastrophic extinction risk comparable to nuclear weapons.

Let’s examine Sutton’s recent interview with Daniel Faggella to understand his crux of disagreement with the AI doom position.

---

00:00 Introduction

03:33 The Worthy vs. Unworthy AI Successor

04:52 “Peaceful AI”

07:54 “Decentralization”

11:57 AI and Human Cooperation

14:54 Micromanagement vs. Decentralization

24:28 Discovering Our Place in the World

33:45 Standard Transhumanism

44:29 AI Traits and Environmental Influence

46:06 The Importance of Cooperation

48:41 The Risk of Superintelligent AI

57:25 The Treacherous Turn and AI Safety

01:04:28 The Debate on AI Control

01:13:50 The Urgency of AI Regulation

01:21:41 Final Thoughts and Call to Action

---

Original interview with Daniel Faggella: youtube.com/watch?v=fRzL5Mt0c8A

Follow Richard Sutton: x.com/richardssutton

Follow Daniel Faggella: x.com/danfaggella

Follow Liron: x.com/liron

Subscribe to my YouTube channel for full episodes and other bonus content: youtube.com/@DoomDebates
...

AI Doom Debate: “Cards Against Humanity” Co-Creator David Pinsof

August 8, 2024 8:50 am

David Pinsof is co-creator of the wildly popular Cards Against Humanity and a social science researcher at UCLA Social Minds Lab. He writes a blog called “Everything Is Bullshit”.

He sees AI doomers as making many different questionable assumptions, and he sees himself as poking holes in those assumptions.

I don’t see it that way at all; I think the doom claim is the “default expectation” we ought to have if we understand basic things about intelligence.

At any rate, I think you’ll agree that his attempt to poke holes in my doom claims on today’s podcast is super good-natured and interesting.

00:00 Introducing David Pinsof

04:12 David’s P(doom)

05:38 Is intelligence one thing?

21:14 Humans vs. other animals

37:01 The Evolution of Human Intelligence

37:25 Instrumental Convergence

39:05 General Intelligence and Physics

40:25 The Blind Watchmaker Analogy

47:41 Instrumental Convergence

01:02:23 Superintelligence and Economic Models

01:12:42 Comparative Advantage and AI

01:19:53 The Fermi Paradox for Animal Intelligence

01:34:57 Closing Statements

Follow David: https://x.com/DavidPinsof
Follow Liron: https://x.com/liron

Thanks for watching. You can support Doom Debates by subscribing to the DoomDebates.com Substack, the YouTube channel, subscribing in your podcast player, and leaving a review on Apple Podcasts.
...

P(Doom) Estimates Shouldn't Inform Policy?? Liron Reacts to Sayash Kapoor

August 5, 2024 8:24 pm

Princeton Comp Sci Ph.D. candidate Sayash Kapoor co-authored a blog post last week with his professor Arvind Narayanan called "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy".

While some non-doomers embraced the arguments, I see it as contributing nothing to the discourse besides demonstrating a popular failure mode: a simple misunderstanding of the basics of Bayesian epistemology.

I break down Sayash's recent episode of Machine Learning Street Talk point-by-point to analyze his claims from the perspective of the one true epistemology: Bayesian epistemology.
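For reference, the Bayesian update at issue is just Bayes' rule applied to a subjective prior credence. Here's a minimal sketch with illustrative, made-up numbers (mine, not figures from the episode or the blog post):

```python
# Bayes' rule with made-up illustrative numbers:
# P(H|E) = P(E|H) * P(H) / [ P(E|H) * P(H) + P(E|~H) * P(~H) ]
prior = 0.10            # prior credence in hypothesis H
p_e_given_h = 0.80      # probability of seeing the evidence if H is true
p_e_given_not_h = 0.20  # probability of seeing the evidence if H is false

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(f"posterior credence: {posterior:.3f}")  # 0.308
```

The point of the Bayesian framing is that a probability like this is a degree of belief you're obligated to update as evidence arrives, not a long-run frequency you can only read off repeated trials.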

00:00 Introduction
03:40 Bayesian Reasoning
04:33 Inductive vs. Deductive Probability
05:49 Frequentism vs Bayesianism
16:14 Asteroid Impact and AI Risk Comparison
28:06 Quantification Bias
31:50 The Extinction Prediction Tournament
36:14 Pascal's Wager and AI Risk
40:50 Scaling Laws and AI Progress
45:12 Final Thoughts

My source material is Sayash's episode of Machine Learning Street Talk: https://www.youtube.com/watch?v=BGvQmHd4QPE

Recommended reading:
https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist

Sayash's blog post that he was being interviewed about is called "AI existential risk probabilities are too unreliable to inform policy": https://www.aisnakeoil.com/p/ai-existential-risk-probabilities

Follow Sayash: https://x.com/sayashk
...

