Nick Bostrom

Nick Bostrom is a Swedish-born philosopher and polymath with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence. He was a Professor at Oxford University and the founding director of the Future of Humanity Institute (2005–2024), and is currently Principal Researcher at the Macrostrategy Research Initiative.

Bostrom is one of the most important and globally recognisable contemporary philosophers.
He is the author of some 200 publications, including Anthropic Bias, Global Catastrophic Risks, Human Enhancement, Superintelligence: Paths, Dangers, Strategies, and, most recently, Deep Utopia: Life and Meaning in a Solved World.

He is known for his pioneering work on existential risk, the simulation argument, the anthropic principle, AI safety, human enhancement ethics, whole brain emulation, superintelligence risks, the reversal test, and global consequentialism, among other topics.

His writings have been translated into more than 30 languages. He is a repeat main-stage TED speaker and has received the Eugene R. Gannon Award for the Continued Pursuit of Human Advancement.
He has twice been named one of the Top 100 Global Thinkers by Foreign Policy magazine and was included in Prospect’s World Thinkers list, as the youngest person in the top 15.

He is one of the most-cited philosophers in the world, and has been referred to as “the Swedish superbrain”.

Nick Bostrom: How AI will lead to tyranny

November 10, 2023 7:05 pm

📰 Subscribe to UnHerd today at: http://unherd.com/join

UnHerd's Flo Read meets Nick Bostrom.

In the last year, artificial intelligence has progressed from a science-fiction fantasy to an impending reality. We can see its power in everything from online gadgets to whispers of a new, “post-singularity” tech frontier — as well as in renewed fears of an AI takeover.

One intellectual who anticipated this decades ago is Nick Bostrom, a Swedish philosopher at the University of Oxford and director of its Future of Humanity Institute. He joined UnHerd’s Florence Read to discuss extinction, the risk of government surveillance and how to use AI for the benefit of humanity.

Listen to the podcast: https://plnk.to/unherd?to=page

Follow UnHerd on social media:
Twitter: https://twitter.com/unherd
Facebook: https://www.facebook.com/unherd/
Instagram: https://www.instagram.com/unherd/
TikTok: https://www.tiktok.com/@unherdtv


// TIMECODES //
00:00 - 01:01 Introduction
01:01 - 03:42 What does Nick Bostrom mean by existential risk?
03:42 - 05:30 Covid-19 and waning trust in global institutions
05:30 - 08:57 The evolution of AI
08:57 - 10:46 Elon Musk’s Grok
10:46 - 13:39 Government interventions in AI
13:39 - 17:10 AI surveillance
17:10 - 18:40 Hyperrealistic propaganda, deep fakes and scepticism
18:40 - 21:15 AI’s agency beyond human intervention
21:15 - 27:06 The West’s liberal values and AI’s utilitarian bent
27:06 - 28:45 AI ethics and potential risks in warfare
28:45 - 34:45 How can we mitigate AI risks?
34:45 - 42:25 The upsides to AI
42:25 - 43:07 Concluding thoughts


#UnHerd #NickBostrom #AI
...

Are We Headed For AI Utopia Or Disaster? - Nick Bostrom

June 29, 2024 5:00 pm

Nick Bostrom is a philosopher, professor at the University of Oxford, and author.

For generations, the future of humanity was envisioned as a sleek, vibrant utopia filled with remarkable technological advancements where machines and humans would thrive together. As we stand on the supposed brink of that future, it appears quite different from our expectations. So what does humanity's future actually hold?

Expect to learn what it means to live in a perfectly solved world, whether we are more likely heading toward a utopia or a catastrophe, how humans will find meaning in a world that no longer needs our contributions, what the future of religion could look like, a breakdown of all the different stages we will move through en route to a final utopia, the current state of AI safety & risk and much more...

-

00:00 Is Nick Hopeful About AI?
03:20 How We Can Get AI Right
07:07 The Moral Status of Non-Human Intelligences
17:36 Different Types of Utopia
19:38 The Human Experience in a Solved World
31:32 Using AI to Satisfy Human Desires
43:25 Current Things That Would Stay in Utopia
49:54 The Value of Daily Struggles
55:07 Implications of Extreme Human Longevity
1:00:19 Constraints That We Can’t Get Past
1:07:27 How Important is This Time for Humanity’s Future?
1:13:40 Biggest AI Development Surprises
1:21:24 Current State of AI Safety
1:28:06 Where to Find Nick

-

Get access to every episode 10 hours before YouTube by subscribing for free on Spotify - https://spoti.fi/2LSimPn or Apple Podcasts - https://apple.co/2MNqIgw

Get my free Reading List of 100 life-changing books here - https://chriswillx.com/books/

Try my productivity energy drink Neutonic here - https://neutonic.com/modernwisdom

-

Get in touch in the comments below or head to...
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
Email: https://chriswillx.com/contact/
...

What happens when our computers get smarter than we are? | Nick Bostrom

April 27, 2015 6:15 pm

Artificial intelligence is getting smarter by leaps and bounds — within this century, research suggests, a computer AI could be as "smart" as a human being. And then, says Nick Bostrom, it will overtake us: "Machine intelligence is the last invention that humanity will ever need to make." A philosopher and technologist, Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values — or will they have values of their own?

TEDTalks is a daily video podcast of the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and much more.
Find closed captions and translated subtitles in many languages at http://www.ted.com/translate

Follow TED news on Twitter: http://www.twitter.com/tednews
Like TED on Facebook: https://www.facebook.com/TED

Subscribe to our channel: http://www.youtube.com/user/TEDtalksDirector
...

Path To AGI, AI Alignment, Digital Minds | Nick Bostrom and Juan Benet | Breakthroughs in Computing

October 25, 2022 9:02 pm

Protocol Labs founder Juan Benet speaks with Nick Bostrom, a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is also the most-cited professional philosopher in the world under the age of 50.

Breakthroughs in Computing is a speaker series focused on how technology will shape society in the next 5-25 years.

Join us in person https://breakthroughs-in-computing.labweek.io/ or sign up for future livestreams this week: https://www.youtube.com/c/ProtocolLabs
...

Nick Bostrom on Superintelligence and the Future of AI | Closer To Truth Chats

April 4, 2024 5:00 pm

Philosopher Nick Bostrom discusses his new book, "Deep Utopia: Life and Meaning in a Solved World", where he asks: In the face of incredible technological advances, what is the point of human existence? Will AI make our life and labor obsolete? In a "solved world," where would we find meaning and purpose?

Bostrom's book, Deep Utopia, is available for purchase now: https://shorturl.at/guWX8

Nick Bostrom is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. He is the world's most cited philosopher aged 50 or under.

Watch more Closer To Truth Chats here: https://t.ly/jJI7e

Closer To Truth, hosted by Robert Lawrence Kuhn and directed by Peter Getzels, presents the world’s greatest thinkers exploring humanity’s deepest questions. Discover fundamental issues of existence. Engage new and diverse ways of thinking. Appreciate intense debates. Share your own opinions. Seek your own answers.

00:00:00 Exploration of technological advancements and philosophical considerations in the era of AI
00:06:16 Exploring extreme conditions in physics to understand laws and implications, not for practical solutions
00:11:24 The concept of post-scarcity Utopia and its appeal to individuals seeking abundance and ease in material resources
00:17:00 Considering the idea of experiencing pleasure and joy directly through advanced means like super drugs or neural implants
00:22:27 Existential questions around boredom in future extreme conditions and eschatological concepts
00:28:13 Potential for increasing subjective and objective interestingness in human lives
00:33:34 Comparison of Bostrom's Solved World with Marxist pure communism and its implications
00:38:55 Exploration of various moral theories and values in relation to shaping the future
00:44:12 Implications of a world filled with sentient beings and the reasons behind such a scenario
00:49:43 Implications of conscious experiences on artificial intelligence and technology manipulation of organic brains
00:55:04 Implications of indefinite lifespan on human mind evolution and identity
01:00:23 Risk assessment of AI superintelligence in the next 100 years
01:06:14 Speculating on the existence of intelligent life in the universe, likelihood of AI consciousness within a thousand years, and the multi-dimensional nature of consciousness
...

Superintelligence | Nick Bostrom | Talks at Google

September 23, 2014 4:19 am

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast track of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

This talk was hosted by Boris Debic.
...

Nick Bostrom on the Meaning of Life in a World where AI can do Everything for Us

April 14, 2024 7:12 pm

This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

NetSuite is offering a one-of-a-kind flexible financing program. Head to https://netsuite.com/EYEONAI to know more.


Venture into the future of AI with Nick Bostrom, a philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, superintelligence risks, and the reversal test.

On episode #181 of Eye on AI, Nick Bostrom explores the existential and societal implications of AI reaching and surpassing human capabilities. As we contemplate a world where all tasks are performed by AI, Nick discusses the potential for a 'technologically solved' society and its impact on human purpose and motivation.

Join us as Nick provides insights into his latest book, "Deep Utopia," where he questions how humans will find meaning when artificial intelligence handles every aspect of labor and creativity. He elaborates on the risks, ethical considerations, and philosophical dilemmas we face as AI continues to evolve at an unprecedented pace.

This episode is an essential exploration of the shifts AI may bring to our societal structures, labour markets, and individual lives.

If you find yourself intrigued by the philosophical journey into AI's potential to redefine humanity, hit the like button and subscribe for more thoughtful discussions on the future landscapes shaped by artificial intelligence.


Stay Updated:

Craig Smith Twitter: https://twitter.com/craigss

Eye on A.I. Twitter: https://twitter.com/EyeOn_AI


(00:00) Introduction and the Concept of a 'Solved World'
(03:05) Nick Bostrom's Background
(04:09) Evolving AI Landscape Post-'Superintelligence'
(06:06) Exploring the Anthropomorphism in Modern AI
(08:02) Predictions and the 'Hockey Stick' Graph
(10:13) AI Safety and Public Perception
(12:58) Deep Utopia and the Search for Meaning
(15:46) Life in a Technologically Mature World
(18:17) Existential Malaise in Modern Society
(20:43) The Potential of Technological Maturity
(23:51) Philosophical Implications of a Solved World
(28:20) Engineering Happiness and Neurological Adjustments
(32:18) Remaining Human Tasks and Cultural Values
(35:45) The Future of Humanity
(47:03) Closing Remarks and Sponsor Message
...

How civilization could destroy itself -- and 4 ways we could prevent it | Nick Bostrom

January 17, 2020 10:48 pm

Visit http://TED.com to get our entire library of TED Talks, transcripts, translations, personalized Talk recommendations and more.

Humanity is on its way to creating a "black ball": a technological breakthrough that could destroy us all, says philosopher Nick Bostrom. In this incisive, surprisingly light-hearted conversation with Head of TED Chris Anderson, Bostrom outlines the vulnerabilities we could face if (or when) our inventions spiral beyond our control -- and explores how we can prevent our future demise.

The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more. You're welcome to link to or embed these videos, forward them to others and share these ideas with people you know. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), submit a Media Request here: http://media-requests.TED.com

Follow TED on Twitter: http://twitter.com/TEDTalks
Like TED on Facebook: http://facebook.com/TED

Subscribe to our channel: http://youtube.com/TED
...

Nick Bostrom - XRisk - Superintelligence, Human Enhancement & the Future of Humanity Institute

February 24, 2013 2:15 am

Nick Bostrom discusses Existential Risk, Superintelligence, and the Future of Humanity Institute
http://www.fhi.ox.ac.uk Transcription of this interview available here: http://hplusmagazine.com/2013/03/12/interivew-with-nick-bostrom/
Questions & talking points:
- What does Existential Risk mean and why is it an important topic?
- Why the focus on Machine Intelligence?
- Eugenics & Genetic Selection?
- Germline Gene Therapy versus Somatic Gene Therapy?
- Machine Enhancement vs Human Enhancement
- Solving the Control Problem
- Transhumanism and its History

Professor Nick Bostrom
Director & James Martin Research Fellow
Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009), and a forthcoming book on Superintelligence. He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
He is best known for his work in five areas: (i) the concept of existential risk; (ii) the simulation argument; (iii) anthropics (developing the first mathematically explicit theory of observation selection effects); (iv) transhumanism, including related issues in bioethics and on consequences of future technologies; and (v) foundations and practical implications of consequentialism. He is currently working on a book on the possibility of an intelligence explosion and on the existential risks and strategic issues related to the prospect of machine superintelligence.
In 2009, he was awarded the Eugene R. Gannon Award (one person selected annually worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences). He has been listed in the FP 100 Global Thinkers list, Foreign Policy magazine's roster of the world's top 100 minds. His writings have been translated into more than 21 languages, and there have been some 80 translations or reprints of his works. He has done more than 500 interviews for TV, film, radio, and print media, and he has addressed academic and popular audiences around the world.

CV: http://www.nickbostrom.com/cv.pdf
Personal Web: http://www.nickbostrom.com/
http://www.fhi.ox.ac.uk/our_staff/research/nick_bostrom

Many thanks for watching!
- Support me via Patreon: https://www.patreon.com/scifuture
- Please Subscribe to this Channel: http://youtube.com/subscription_center?add_user=TheRationalFuture
- Science, Technology & the Future website: http://scifuture.org
...

Nick Bostrom - The SuperIntelligence Control Problem - Oxford Winter Intelligence

June 28, 2013 5:09 pm

http://www.fhi.ox.ac.uk Winter Intelligence Oxford - AGI12 - http://agi-conference.org/2012 ==The SuperIntelligence Control Problem==

http://nickbostrom.com

The book about superintelligence is inching towards completion. (Since the final stages of a book project always take longer than one thinks they will, I try not to think about how long they will take.) Finished a co-authored paper on unilateralism. Some recent media coverage of our work: although positive, I am still struck by how whatever one actually says to the media is almost always reported as one of a small number of available clichés. But there was an unusually good feature article by Ross Andersen in Aeon magazine.
http://www.aeonmagazine.com/world-views/ross-andersen-human-extinction/
...

Nick Bostrom - Simulations - Three Possibilities

January 25, 2013 5:47 am

http://www.simulation-argument.com/
The simulation argument is continuing to attract a great deal of attention. I regret that I cannot usually respond to individual queries about the argument.

http://www.simulation-argument.com/simulation.html
ABSTRACT. This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
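The tripartite conclusion rests on a simple expected-fraction calculation in the paper. In Bostrom's notation, let $f_p$ be the fraction of human-level civilizations that reach a posthuman stage, $\bar{N}$ the average number of ancestor-simulations run by such a civilization, and $H$ the average number of individuals who live in a civilization before it becomes posthuman. The fraction of all human-type observers who are simulated is then:

```latex
% Fraction of human-type observers who live in simulations (Bostrom 2003)
f_{\mathrm{sim}} \;=\; \frac{f_p \,\bar{N} H}{f_p \,\bar{N} H + H}
            \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
```

Unless $f_p \bar{N}$ is very small, $f_{\mathrm{sim}}$ is close to one. So either almost no civilizations reach the posthuman stage (proposition 1), posthuman civilizations run almost no ancestor-simulations (proposition 2), or observers like us are almost certainly simulated (proposition 3).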
...

Mindscape 111 | Nick Bostrom on Anthropic Selection and Living in a Simulation

August 24, 2020 5:01 pm

Blog post with audio player, show notes, and transcript: https://www.preposterousuniverse.com/podcast/2020/08/24/111-nick-bostrom-on-anthropic-selection-and-living-in-a-simulation/

Patreon: https://www.patreon.com/seanmcarroll

Mindscape Podcast playlist: https://www.youtube.com/playlist?list=PLrxfgDEc2NxY_fRExpDXr87tzRbPCaA5x

Human civilization is only a few thousand years old (depending on how we count). So if civilization will ultimately last for millions of years, it could be considered surprising that we’ve found ourselves so early in history. Should we therefore predict that human civilization will probably disappear within a few thousand years? This “Doomsday Argument” shares a family resemblance to ideas used by many professional cosmologists to judge whether a model of the universe is natural or not. Philosopher Nick Bostrom is the world’s expert on these kinds of anthropic arguments. We talk through them, leading to the biggest doozy of them all: the idea that our perceived reality might be a computer simulation being run by enormously more powerful beings.
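As a rough sketch (the standard Gott/Leslie form of the argument, not necessarily the exact version discussed in the episode): treat your own birth rank $n$ as a uniform random draw from the ranks $1, \dots, N$ of all humans who will ever live. Then:

```latex
% Doomsday Argument, 95% confidence bound from uniform self-sampling
P\!\left(n > 0.05\,N\right) = 0.95
\quad\Longrightarrow\quad
N < 20\,n \ \text{ with 95\% confidence}
```

With on the order of $n \approx 6 \times 10^{10}$ humans born to date, this bounds the total at roughly $1.2 \times 10^{12}$ births. The controversy, which Bostrom's work on observation selection effects addresses, is whether the uniform self-sampling assumption is legitimate in the first place.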

Nick Bostrom received his Ph.D. in philosophy from the London School of Economics. He also has bachelor’s degrees in philosophy, mathematics, logic, and artificial intelligence from the University of Gothenburg, an M.A. in philosophy and physics from the University of Stockholm, and an M.Sc. in computational neuroscience from King’s College London. He is currently a Professor of Applied Ethics at the University of Oxford, Director of the Oxford Future of Humanity Institute, and Director of the Oxford Martin Programme on the Impacts of Future Technology. He is the author of Anthropic Bias: Selection Effects in Science and Philosophy and Superintelligence: Paths, Dangers, Strategies.
...

Nick Bostrom | Life and Meaning in an AI Utopia

March 29, 2024 5:00 pm

What would life look like in a fully automated world? How would we derive meaning in a world of superintelligence?

Today's Win-Win episode is all about utopias, dystopias and thought experiments, because I'm talking to Professor Nick Bostrom. Nick is one of the world’s leading philosophers - a foremost thinker on the nature of consciousness, AI, catastrophic risks and cosmology. He’s also the guy behind the Simulation Hypothesis, the Paperclip Maximizer thought experiment and the seminal AI book Superintelligence… he even inspired this video of mine!
https://www.youtube.com/watch?v=tq9XMr3GDUw

Thanks to Igor for joining me in this one, give him a follow at: https://twitter.com/igorkurganov?lang=en

Off into the hypotheti-sphere we go…

Chapters
0:00 - Intro
01:42 - Why shift focus to Utopia?
03:31 - Different types of Utopias
11:40 - How to find purpose in a solved world?
18:31 - Potential Limits to Technology
22:34 - How would Utopians approach Competition?
30:24 - Superintelligence
34:39 - Vulnerable World Hypothesis
39:48 - Thinking in Superpositions
41:24 - Solutions to the Vulnerable World?
46:34 - Aligning Markets to Defensive Tech
48:43 - Digital Minds & Uploading
52:25 - Consciousness & AI
55:08 - Outro

Links:
Nick’s Website - https://nickbostrom.com/
Anthropic Bias Paper - https://anthropic-principle.com/
Deep Utopia Book - https://nickbostrom.com/booklink/deep-utopia
Superintelligence book - Superintelligence: Paths, Dangers, Strategies
Vulnerable World Hypothesis - https://nickbostrom.com/papers/vulnerable.pdf
Orthogonality Thesis - https://nickbostrom.com/superintelligentwill.pdf
Simulation Argument - https://simulation-argument.com/
Digital Minds - https://nickbostrom.com/papers/interests-of-digital-minds.pdf
Future of Humanity Institute - https://www.fhi.ox.ac.uk/

The Win-Win Podcast:
Poker champion Liv Boeree takes to the interview chair to tease apart the complexities of one of the most fundamental parts of human nature: competition. Liv is joined by top philosophers, gamers, artists, technologists, CEOs, scientists, athletes and more to understand how competition manifests in their world, and how to change seemingly win-lose games into Win-Wins.

Watch the previous episode with TED founder Chris Anderson here: https://www.youtube.com/watch?v=FhviOzgqQ84

Credits
♾️  Hosted by Liv Boeree & Igor Kurganov
♾️  Produced & Edited by Raymond Wei
♾️  Audio Mix by Keir Schmidt
...

Nick Bostrom: Superintelligence, Posthumanity, and AI Utopia | Robinson's Podcast #205

April 28, 2024 5:20 pm

Patreon: https://bit.ly/3v8OhY7

Nick Bostrom is a Swedish philosopher who was most recently Professor at Oxford University, where he served as the founding Director of the Future of Humanity Institute. He is best known for his book Superintelligence (Oxford, 2014), which covers the dangers of artificial intelligence. In this episode, Robinson and Nick discuss his more recent book, Deep Utopia: Life and Meaning in a Solved World (Ideapress, 2024). More particularly, they discuss the alignment problem with artificial intelligence, the problem of utopia, how artificial intelligence—if it doesn’t make our world horrible—could make it wonderful, the future of technology, and how humans might adjust to a life without work.

Nick’s Website: ⁠https://nickbostrom.com⁠

Deep Utopia: https://a.co/d/b8eHuhQ

OUTLINE
00:00 Introduction
02:50 From AI Dystopia to AI Utopia 
09:15 On Superintelligence and the Alignment Problem
17:48 The Problem of Utopia
28:04 AI and the Purpose of Mathematics
38:59 What Technologies Can We Expect in an AI Utopia?
43:59 Philosophical Problems with Immortality
55:14 Are There Advanced Alien Civilizations Out There?
59:54 Why Don’t We Live in Utopia?

Robinson’s Website: ⁠http://robinsonerhardt.com⁠

Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University. Join him in conversations with philosophers, scientists, weightlifters, artists, and everyone in-between.
...

Deceiving AI Might Backfire On Us - Nick Bostrom

September 1, 2024 3:10 pm

Get all sides of every story and be better informed at https://ground.news/AlexOC - subscribe for 40% off unlimited access.

For early, ad-free access to videos, support the channel at https://www.alexoconnor.com

To donate to my PayPal (thank you): http://www.paypal.me/cosmicskeptic

- VIDEO NOTES

Nick Bostrom is a philosopher known for his work on existential risk, the anthropic principle, human enhancement ethics, whole brain emulation, and superintelligence risks. His recent book, Deep Utopia, explores what might happen if we get AI development right.

- LINKS

Buy Deep Utopia (affiliate link): https://amzn.to/4g1lyrn

- TIMESTAMPS

0:00 Are you optimistic or pessimistic about AI?
4:05 What is the biggest threat from AI?
7:37 What are the biggest benefits AI might bring?
12:26 What happens to meaning in an automated world?
30:45 What does life look like in the AI utopia?
39:27 Will AI become our victim?
45:28 What can we do to prevent AI dystopia?
56:07 What does conscious AI look like?

- CONNECT

My Website: https://www.alexoconnor.com

SOCIAL LINKS:

Twitter: http://www.twitter.com/cosmicskeptic
Facebook: http://www.facebook.com/cosmicskeptic
Instagram: http://www.instagram.com/cosmicskeptic
TikTok: @CosmicSkeptic

The Within Reason Podcast: https://podcasts.apple.com/gb/podcast/within-reason/id1458675168

- CONTACT

Business email: [email protected]

Or send me something:

Alex O'Connor
PO Box 1610
OXFORD
OX4 9LL
ENGLAND

------------------------------------------
...

Nick Bostrom: Simulation and Superintelligence | Lex Fridman Podcast #83

March 26, 2020 2:27 am

Nick Bostrom is a philosopher at University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risks, simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere.

Support this podcast by signing up with these sponsors:
- ExpressVPN at https://www.expressvpn.com/lexpod
- MasterClass: https://masterclass.com/lex
- Cash App - use code "LexPodcast" and download:
- Cash App (App Store): https://apple.co/2sPrUHe
- Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
Nick's website: https://nickbostrom.com/
Future of Humanity Institute:
- https://twitter.com/fhioxford
- https://www.fhi.ox.ac.uk/
Books:
- Superintelligence: https://amzn.to/2JckX83
Wikipedia:
- https://en.wikipedia.org/wiki/Simulation_hypothesis
- https://en.wikipedia.org/wiki/Principle_of_indifference
- https://en.wikipedia.org/wiki/Doomsday_argument
- https://en.wikipedia.org/wiki/Global_catastrophic_risk

PODCAST INFO:
Podcast website:
https://lexfridman.com/podcast
Apple Podcasts:
https://apple.co/2lwqZIr
Spotify:
https://spoti.fi/2nEwCF8
RSS:
https://lexfridman.com/feed/podcast/
Full episodes playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist:
https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
2:48 - Simulation hypothesis and simulation argument
12:17 - Technologically mature civilizations
15:30 - Case 1: if something kills all possible civilizations
19:08 - Case 2: if we lose interest in creating simulations
22:03 - Consciousness
26:27 - Immersive worlds
28:50 - Experience machine
41:10 - Intelligence and consciousness
48:58 - Weighing probabilities of the simulation argument
1:01:43 - Elaborating on Joe Rogan conversation
1:05:53 - Doomsday argument and anthropic reasoning
1:23:02 - Elon Musk
1:25:26 - What's outside the simulation?
1:29:52 - Superintelligence
1:47:27 - AGI utopia
1:52:41 - Meaning of life

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/LexFridmanPage
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman
...
