Eliezer Yudkowsky

The original AI alignment researcher: his work on the prospect of a runaway intelligence explosion and on existential risk from AI has been extremely influential and practically breathed life into the whole AI Safety movement.
He has been thinking about these questions long and hard, and has been central to building the framework of alignment theory since at least 2004, when AI was hardly on anyone's mind.

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularising ideas related to artificial intelligence alignment.

He is the founder of, and a research fellow at, the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. He also founded the popular LessWrong community, originally as a sister site and offshoot of Robin Hanson's Overcoming Bias. LessWrong is dedicated to the study of human rationality and decision-making in general, and pulls together material from mathematics, economics, cognitive science, and other disciplines relevant to how individuals and groups can act rationally in complex environments.

People often paint him as a pessimist, but he is in fact an ultra-optimist. He was originally a major pro-AI enthusiast who wrote beautiful pieces about the Singularity, until his optimism about the momentum of technological progress drove him to the conclusion that artificial intelligence will soon blow past humanity, to the point where the gap will be greater than the one between humans and cockroaches. He is actually much more of an optimist than most accelerationists out there. What makes him a doomer is that he has thought long and hard about the value-alignment problem and, seeing the trend, estimates that we will reach god-level AI before we know how to make it care about us.
Like most “AI doomers”, he is a techno-optimist who advocates for a pause only long enough for us to change the order of events.
If only we could get alignment before capabilities, we would unlock the closest thing to paradise we can have.

You don’t get to just plan how to use it; it is planning how to use itself!

Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

July 11, 2023 7:20 pm

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don't lead to our extinction.

Watch more: https://go.ted.com/eliezeryudkowsky

https://youtu.be/Yd0yQ9yxSYY

...

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

March 30, 2023 5:13 pm

Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI.

EPISODE LINKS:
Eliezer's Twitter: https://twitter.com/ESYudkowsky
LessWrong Blog: https://lesswrong.com
Eliezer's Blog page: https://www.lesswrong.com/users/eliezer_yudkowsky
Books and resources mentioned:
1. AGI Ruin (blog post): https://lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
2. Adaptation and Natural Selection: https://amzn.to/40F5gfa

OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI Timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love

...

The Power of Intelligence - An Essay By Eliezer Yudkowsky

March 11, 2023 5:59 pm

The Power of Intelligence is an essay published by Eliezer Yudkowsky in 2007.

Now, a few points:

Sorting Pebbles Into Correct Heaps was about the orthogonality thesis. A consequence of the orthogonality thesis is that powerful artificial intelligence will not necessarily share human values.

This video is about just how powerful and dangerous intelligence is. These two insights put together are a cause for concern.

If humanity doesn't solve the problem of aligning AIs to human values, there's a high chance we'll not survive the creation of artificial general intelligence. This issue is known as "The Alignment Problem". Some of you may be familiar with the paperclips scenario: an AGI created to maximize the number of paperclips uses up all the resources on Earth, and eventually outer space, to produce paperclips. Humanity dies early in this process. But, given the current state of research, even a simple goal such as “maximize paperclips” is already too difficult for us to program reliably into an AI. We simply don't know how to aim AIs reliably at goals. If tomorrow a paperclip company manages to program a superintelligence, that superintelligence likely won't maximize paperclips. We have no idea what it would do. It would be an alien mind pursuing alien goals. Knowing this, solving the alignment problem for human values in general, with all their complexity, appears like truly a daunting task. But we must rise to the challenge, or things could go very wrong for us.

You can read The Power of Intelligence and many other essays by Eliezer Yudkowsky on this website: https://www.readthesequences.com/

▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Written by Eliezer Yudkowsky, narrated by Robert Miles, and animated by the Rational Animations team; sound design and music by Epic Mountain.
...

Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

April 6, 2023 3:57 pm

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky

Timestamps:
(0:00:00) - TIME article
(0:09:06) - Are humans aligned?
(0:37:35) - Large language models
(1:07:15) - Can AIs help with alignment?
(1:30:17) - Society’s response to AI
(1:44:42) - Predictions (or lack thereof)
(1:56:55) - Being Eliezer
(2:13:06) - Orthogonality
(2:35:00) - Could alignment be easier than we think?
(3:02:15) - What will AIs want?
(3:43:54) - Writing fiction & whether rationality helps you win
...

159 - We’re All Gonna Die with Eliezer Yudkowsky

February 20, 2023 1:30 pm

Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space.

We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and if there’s anything we can do to survive.

This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity.

Be warned before diving into this episode, dear listener. Once you dive in, there’s no going back.

Topics Covered

0:00 Intro
10:00 ChatGPT
16:30 AGI
21:00 More Efficient than You
24:45 Modeling Intelligence
32:50 AI Alignment
36:55 Benevolent AI
46:00 AI Goals
49:10 Consensus
55:45 God Mode and Aliens
1:03:15 Good Outcomes
1:08:00 Ryan’s Childhood Questions
1:18:00 Orders of Magnitude
1:23:15 Trying to Resist
1:30:45 MIRI and Education
1:34:00 How Long Do We Have?
1:38:15 Bearish Hope
1:43:50 The End Goal

------
Resources:

Eliezer Yudkowsky
https://twitter.com/ESYudkowsky

MIRI
https://intelligence.org/

Reply to Francois Chollet
https://intelligence.org/2017/12/06/chollet/

Grabby Aliens
https://grabbyaliens.com/

...

AI will kill all of us | Eliezer Yudkowsky interview

July 2, 2023 9:00 pm

Eliezer Yudkowsky, Founder and Senior Research Fellow of the Machine Intelligence Research Institute, joins David to discuss artificial intelligence, machine learning, and much more.
---
Broadcast on June 29, 2023

...

Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start

December 28, 2016 11:43 pm

On May 5, 2016, Eliezer Yudkowsky gave a talk at Stanford University for the 26th Annual Symbolic Systems Distinguished Speaker series (https://symsys.stanford.edu/viewing/event/26580).

Eliezer is a senior research fellow at the Machine Intelligence Research Institute, a research nonprofit studying the mathematical underpinnings of intelligent behavior.

Talk details—including slides, notes, and additional resources—are available at https://intelligence.org/stanford-talk/.

UPDATES/CORRECTIONS:

1:05:53 - Correction Dec. 2016: FairBot cooperates iff it proves that you cooperate with it.

1:08:19 - Update Dec. 2016: Stuart Russell is now the head of a new alignment research institute, the Center for Human-Compatible AI (http://humancompatible.ai/).

1:08:38 - Correction Dec. 2016: Leverhulme CFI is a joint venture between Cambridge, Oxford, Imperial College London, and UC Berkeley. The Leverhulme Trust provided CFI's initial funding, in response to a proposal developed by CSER staff.

1:09:04 - Update Dec 2016: Paul Christiano now works at OpenAI (as does Dario Amodei). Chris Olah is based at Google Brain.
...

Live: Eliezer Yudkowsky - Is Artificial General Intelligence too Dangerous to Build?

April 20, 2023 11:10 am

Live from the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University, join us for an interactive Q&A with Yudkowsky about AI Safety!

Eliezer Yudkowsky discusses his rationale for ceasing the development of AIs more sophisticated than GPT-4. Dr. Mark Bailey of National Intelligence University will moderate the discussion.

An open letter published on March 22, 2023 calls for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." In response, Yudkowsky argues that this proposal does not do enough to protect us from the risks of losing control of superintelligent AI.

Eliezer Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field of alignment.
Dr. Mark Bailey is the Chair of the Cyber Intelligence and Data Science Department, as well as the Co-Director of the Data Science Intelligence Center, at the National Intelligence University.
...

Who Would Win the AI Arms Race? | AI IRL

July 12, 2023 6:00 pm

Bloomberg's Nate Lanxon and Jackie Davalos are joined by controversial AI researcher Eliezer Yudkowsky to discuss the danger posed by misaligned AI. Yudkowsky contends AI is a grave threat to civilization, that there's a desperate need for international cooperation to crack down on bad actors, and that the chance humanity survives AI is slim.

...

Can We Stop the AI Apocalypse? | Eliezer Yudkowsky

July 14, 2023 2:02 am

Artificial Intelligence (AI) researcher Eliezer Yudkowsky makes the case for why we should view AI as an existential threat to humanity. Rep. Crenshaw gets into the basics of AI and how the new AI program, GPT-4, is a revolutionary leap forward in the tech. Eliezer hypothesizes the most likely scenarios if AI becomes self-aware and unconstrained – from rogue programs that blackmail targets to self-replicating nano robots. They discuss building global coalitions to rein in AI development and how China views AI. And they explore first steps Congress could take to limit AI’s capabilities for harm while still enabling its promising advances in research and development.

Eliezer Yudkowsky is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California. Follow him on Twitter @ESYudkowsky
...

Eliezer Yudkowsky on the Dangers of AI 5/8/23

May 8, 2023 5:30 pm

Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on earth will die. Listen as Yudkowsky speaks with EconTalk's Russ Roberts on why we should be very, very afraid and why we're not prepared or able to manage the terrifying risks of AI.
Links, transcript, and more information: https://www.econtalk.org/eliezer-yudkowsky-on-the-dangers-of-ai/
...

Will AI Destroy Us? - AI Virtual Roundtable

July 28, 2023 10:00 pm

Today's episode is a roundtable discussion about AI safety with Eliezer Yudkowsky, Gary Marcus, and Scott Aaronson. Eliezer Yudkowsky is a prominent AI researcher and writer known for co-founding the Machine Intelligence Research Institute, where he spearheaded research on AI safety. He's also widely recognized for his influential writings on the topic of rationality. Scott Aaronson is a theoretical computer scientist and author, celebrated for his pioneering work in the field of quantum computation. He also holds a chair in computer science at UT Austin, but is currently taking a leave of absence to work at OpenAI. Gary Marcus is a cognitive scientist, author, and entrepreneur known for his work at the intersection of psychology, linguistics, and AI. He's also authored several books, including "Kluge" and "Rebooting AI: Building Artificial Intelligence We Can Trust".

This episode is all about AI safety. We talk about the alignment problem. We talk about the possibility of human extinction due to AI. We talk about what intelligence actually is. We talk about the notion of a singularity or an AI takeoff event and much more.

It was really great to get these three guys in the same virtual room and I think you'll find that this conversation brings something a bit fresh to a topic that has admittedly been beaten to death on certain corners of the internet.

Chapters:
00:00:00 Intro
00:03:45 The Uncertainty Of ChatGPT's Potential Threats
00:05:50 The Need To Understand And Align Machine Values
00:09:01 What Does AI Want In The Future?
00:14:44 Universal Threat Of Superintelligence: A Global Concern
00:17:13 Inadequacy Of Bombing Data Centers And The Pace Of Technological Advancements
00:20:48 Current Machines Lack General Intelligence
00:25:46 Leveraging AI As A Partner For Complex Tasks
00:29:46 Improving GPT's Knowledge Gap: From GPT-3 To GPT-4
00:32:00 The Unseen Brilliance Of Artificial Intelligence
00:37:27 Introducing A Continuum Spectrum Of Artificial General Intelligence
00:39:54 The Possibility Of Smarter Future AI: Surprising Or Expected?
00:42:19 The Importance Of Superintelligence's Intentions And Potential Threat To Humanity
00:47:20 The Evolution Of Optimism And Cynicism In Science
00:52:17 The Importance Of Getting It Right The First Time
00:53:53 Concerns Over Artificial Intelligence And Its Potential Threat To Humanity
00:57:39 Importance Of Global Coordination For Addressing Concerns About Superintelligence
00:59:04 Exploring The Potential Of Superintelligent AI For Human Happiness
01:03:32 The Potential Of AI To Solve Humanity's Problems
01:05:45 The Uncertain Impact Of GPT-4
01:08:30 The Future Of Utility And The Dangers Ahead
01:15:04 The Challenge Of Internalized Constraints And Jailbreaking
01:19:04 The Need For Diverse Approaches In Alignment Theory
01:23:47 The Importance Of Legible Warning Bills And Capability Evaluations
01:26:31 Exploring Hypotheses And Constraints For Robot Behavior
01:27:44 Lack Of Will And Obsession With LLMs Hinders Progress In Street Light Installation
01:33:20 The Challenges Of Developing Knowledge About The Alignment Problem

...

Sam Harris and Eliezer Yudkowsky - The A.I. in a Box thought experiment

February 9, 2018 10:56 pm

The AI-box experiment is an informal thought experiment devised by Eliezer Yudkowsky.
For those who still have doubts regarding the dangers of artificial intelligence, listen to this!
If you want more on this experiment: https://en.wikipedia.org/wiki/AI_box

Full Podcast: https://samharris.org/podcasts/116-ai-racing-toward-brink

...

George Hotz vs Eliezer Yudkowsky AI Safety (FULL DEBATE)

August 16, 2023 3:43 am

George Hotz vs Eliezer Yudkowsky AI Safety Debate (FULL DEBATE) ...

Discussion / debate with AI expert Eliezer Yudkowsky

May 4, 2023 6:03 pm

An interesting discussion / debate with Eliezer Yudkowsky on whether AI will end humanity. For some, this may be fascinating, frustrating, or frightening, or it may spur more curiosity; I have no idea. I feel like we each tried our best to make our case, even if we got lost in the weeds a few times. There's definitely food for thought here either way. Also, I screwed up and the chat text ended up being too tiny, sorry about that.

https://accursedfarms.com
...

Sam Harris 2018 - IS vs OUGHT, Robots of The Future Might Deceive Us with Eliezer Yudkowsky

April 17, 2018 5:47 pm

Sam Harris 2018 - IS vs OUGHT, Robots of The Future Might Deceive Us with Eliezer Yudkowsky

...

Eliezer Yudkowsky on if Humanity can Survive AI

May 6, 2023 8:20 pm

Eliezer Yudkowsky is a researcher, writer, and advocate for artificial intelligence safety. He is best known for his writings on rationality, cognitive biases, and the development of superintelligence. Yudkowsky has written extensively on the topic of AI safety and has advocated for the development of AI systems that are aligned with human values and interests. Yudkowsky is the co-founder of the Machine Intelligence Research Institute (MIRI), a non-profit organization dedicated to researching the development of safe and beneficial artificial intelligence. He is also a co-founder of the Center for Applied Rationality (CFAR), a non-profit organization focused on teaching rational thinking skills. He is also a frequent author at LessWrong.com, where his writings were collected into Rationality: From AI to Zombies.

In this episode, we discuss Eliezer’s concerns with artificial intelligence and his recent conclusion that it will inevitably lead to our demise. He’s a brilliant mind, an interesting person, and genuinely believes all of the stuff he says. So I wanted to have a conversation with him to hear where he is coming from, how he got there, understand AI better, and hopefully help us bridge the divide between the people that think we’re headed off a cliff and the people that think it’s not a big deal.

(0:00) Intro
(1:18) Welcome Eliezer
(6:27) How would you define artificial intelligence?
(15:50) What is the purpose of a fire alarm?
(19:29) Eliezer’s background
(29:28) The Singularity Institute for Artificial Intelligence
(33:38) Maybe AI doesn’t end up automatically doing the right thing
(45:42) AI Safety Conference
(51:15) Disaster Monkeys
(1:02:15) Fast takeoff
(1:10:29) Loss function
(1:15:48) Protein folding
(1:24:55) The deadly stuff
(1:46:41) Why is it inevitable?
(1:54:27) Can’t we let tech develop AI and then fix the problems?
(2:02:56) What were the big jumps between GPT3 and GPT4?
(2:07:15) “The trajectory of AI is inevitable”
(2:28:05) Elon Musk and OpenAI
(2:37:41) Sam Altman Interview
(2:50:38) The most optimistic path to us surviving
(3:04:46) Why would anything super intelligent pursue ending humanity?
(3:14:08) What role do VCs play in this?

Show Notes:
https://twitter.com/liron/status/1647443778524037121?s=20
https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/
https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
https://www.youtube.com/watch?v=q9Figerh89g
https://www.vox.com/the-highlight/23447596/artificial-intelligence-agi-openai-gpt3-existential-risk-human-extinction
Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start

Mixed and edited: Justin Hrabovsky
Produced: Rashad Assir
Executive Producer: Josh Machiz
Music: Griff Lawson

...

Eliezer Yudkowsky on "Open Problems in Friendly Artificial Intelligence" at Singularity Summit 2011

October 24, 2011 7:58 am

The Singularity Summit 2011 was a TED-style two-day event at the historic 92nd Street Y in New York City. The next event will take place in San Francisco, on October 13 & 14, 2012. For more information, visit:
http://www.singularitysummit.com
...

Connor Leahy & Eliezer Yudkowsky - Japan AI Alignment Conference 2023

March 24, 2023 12:31 pm

Q&A on AI Alignment by Connor Leahy and Eliezer Yudkowsky.

Recorded March 11 at the 2023 Japan AI Alignment Conference, organized by Conjecture and ARAYA.

https://jac2023.ai/
...

“There is no Hope!” - Eliezer Yudkowsky on AI

November 30, 2023 9:53 am

Leading AI thinker Eliezer Yudkowsky has alarming takes.

🎧 Listen to the full Podcast
https://www.youtube.com/watch?v=gA1sNLL6yg4

...

Eliezer Yudkowsky on AI doom & how to stop illegal boats – The Week in 60 Minutes | SpectatorTV

July 13, 2023 7:00 pm

James Heale is joined by Tom Hunt MP and Tim Farron MP to debate the illegal migration bill. Also on the show, will AI kill us all? Eliezer Yudkowsky and James Phillips discuss; Katy Balls and Stephen Bush look at Labour's future relationship with the trade unions; Louise Perry on Britain's addiction to plastic surgery and Alice Loxton on Britain's love for gossip.

00:00 Welcome from James Heale
01:47 How to stop the boats? With Tom Hunt MP and Tim Farron MP
19:03 Will AI kill us? With Eliezer Yudkowsky & James Phillips
33:46 Will Starmer win over the unions? With Katy Balls & Stephen Bush
45:41 Britain's plastic surgery addiction. With Louise Perry
57:55 Why do Britons love to gossip? With Alice Loxton

...

Catastrophe Scenario for AI

February 22, 2023 9:00 pm

What does the catastrophe scenario for AI look like? Eliezer Yudkowsky shares his vision of how it might unfold.

...

Should we shut down AI? | Critic Eliezer Yudkowsky goes head to head with AI researcher Joscha Bach

July 27, 2024 4:00 pm

Should we see AI as opening up an era of ground-breaking innovation, or does it foreshadow the loss of vital human attributes and independence?


AI safety researcher Eliezer Yudkowsky battles cognitive scientist Joscha Bach in this excerpt pulled from our recent debate, 'AI and the end of humanity'. This debate took place during July's IAI Live - our monthly online event hosting the world's biggest speakers as they confront the most pressing topics.

Watch the full debate at https://iai.tv/video/ai-and-the-end-of-humanity?utm_source=YouTube&utm_medium=description&utm_campaign=head-to-head

From sophisticated robots to the development of new drugs, artificial intelligence is shaping our future in many sectors. Once we thought the creation of original text and forms of art was the select preserve of human beings, but the exponential development of AI is challenging these assumptions. An increasing number claim that AI threatens the very idea of what it is to be human. A recent survey of more than 900 technology pioneers and policy leaders predicted AI would threaten human autonomy and agency, with over a third saying we would be worse off in the future. This is confirmed, they argue, by Musk's proposal that we should link our brains directly to machines.

Does AI fundamentally challenge what it means to be human? Or is all the talk of its radical importance a sign that we have been taken in by the marketing hype of an enormously profitable industry, and humans not only remain very much in control but will continue to do so?


Eliezer Yudkowsky is a leading AI safety researcher, renowned for popularizing the concept of friendly AI.

Joscha Bach is a cognitive scientist who is pushing the limits of what we can achieve with Artificial Intelligence.

...
