John Sherman

John Sherman is an award-winning reporter, CEO and family man (father of two) who led a meaningful and happy life until his world suddenly turned upside down in April 2023, when he came to the heart-stopping realisation of where upcoming frontier AI is taking our little planet at breakneck speed: a place where nothing we currently value can be recognised anymore. Since that moment he has dedicated a huge part of his professional bandwidth to raising awareness among the general public.

His journalism has been recognised many times. He is a Peabody Award winner and a "True Detective" of corporate environmental-destruction legal cases. He has an instinct for seeing the patterns: the race dynamics that lead to narrow short-term capital gains while leaving everyone worse off, the darkness, the enterprise bulls*t.

Awards he has won include: Peabody Award, Alfred I. duPont-Columbia Award, National Emmy Award, National Edward R. Murrow Award, Regional Emmy Awards, Addy Awards, Telly Awards, BBJ 40 under 40, Baltimore Magazine Best of Baltimore and Baltimorean of the Year. (He was also featured as a question on the TV show Jeopardy in 2007 😜)

In October 2023, he started his awesome podcast, releasing new interviews weekly. It is unique in that its target audience includes everyone: the next-door common people, anyone this unprecedented disruption will affect, not just intellectual elites. His growing audience spans all ages and walks of life, and his guests include professors and Silicon Valley dudes, but also moms, artists and grandpas.

Make sure you subscribe to it.

"Pause AI or Die" For Humanity: An AI Safety Podcast Episode #14, Joep Meindertsma Interview

February 7, 2024 2:14 pm

In Episode #14, John interviews Joep Meindertsma, Founder of Pause AI, a global AI safety policy and protest organization. Pause AI was behind the first-ever AI Safety protests on the planet.

John and Joep talk about what's being done, how it all feels, how it all might end, and even broach the darkest corner of all of this: suffering risk. This conversation embodies a spirit this movement needs: we can be upbeat and positive as we talk about the darkest subjects possible. It's not "optimism" to race to build suicide machines, but it is optimism to assume the best, and to believe we can and must succeed no matter what the odds.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.



Resources:

https://pauseai.info/

https://discord.gg/pVMWjddaW7

Sample Letter to Elected Leaders:

Dear XXXX-

I'm a constituent of yours; I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history; nothing else is close.

Have you read the 22-word statement from the Center for AI Safety on 5/31/23 that Sam Altman and all the big AI CEOs signed? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?

Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth within 1-50 years. This is not science fiction or hyperbole. This is our current status quo.

It's like a pharma company saying it has a drug that can cure all diseases, but the drug hasn't been through any clinical trials and may also kill anyone who takes it. Then, with no oversight or regulation, they have put the new drug in the public water supply.

Big AI is making tech its makers openly admit they cannot control, do not understand, and that could kill us all. Their resources are split 99:1 toward making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.

I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations at a certain level.

I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?

Thanks very much.

XXXXXX
Address
Phone
...

Episode #26 - “Pause AI Or We All Die” Holly Elmore Interview, For Humanity: An AI Safety Podcast

May 1, 2024 3:23 pm

Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast

FULL INTERVIEW STARTS AT (00:09:55)

In episode #26, host John Sherman and Pause AI US Founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:
**Progress in Artificial Intelligence (00:00:00)**
Discussion about the rapid progress in AI, its impact on AI safety, and revisiting assumptions.

**Introduction to AI Safety Podcast (00:00:49)**
Introduction to the "For Humanity and AI Safety" podcast, its focus on human extinction threat from AI, and revising AI risk percentages.

**Need for Compute Cap Regulations (00:04:16)**
Discussion about the need for laws to cap compute power used by big AI companies, ethical implications, and the appointment of Paul Christiano to a new AI safety governmental agency.

**Personal Journey into AI Risk Awareness (00:15:26)**
Holly Elmore's personal journey into AI risk awareness, understanding AI risk, humility, and the importance of recognizing the potential impact of events we have never experienced.

**The Overton Window Shift and Imagination Limitation (00:22:05)**
Discussion on societal reactions to dramatic changes and the challenges of imagining the potential impact of artificial intelligence.

**OpenAI's Approach to AI Safety (00:25:53)**
Discussion on OpenAI's strategy for creating AI, the mindset at OpenAI, and the internal dynamics within the AI safety community.

**The History and Evolution of AI Safety Community (00:41:37)**
Discusses the origins and changes in the AI safety community, engaging the public, and ethical considerations in AI safety decision-making.

**Impact of Technology on Social Change (00:51:47)**
Explores differing perspectives on the role of technology in driving social change, perception of technology, and progress.

**Challenges and Opportunities in AI Adoption (01:02:42)**
Explores the possibility of a third way in AI adoption, the effectiveness of protests, and concerns about AI safety.

Resources:

Azeem Azhar + Connor Leahy Podcast
Debating the existential risk of AI, with Connor Leahy

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
...

Episode #24 - “YOU can help save the world from AI Doom” For Humanity: An AI Safety Podcast

April 17, 2024 2:41 pm

TRAILER (00:00:00 - 00:05:20)
FULL INTERVIEW STARTS AT (00:08:05)

In Episode #24, host John Sherman and Nonlinear co-founder Kat Woods discuss the critical need to prioritize AI safety in the face of developing superintelligent AI. Kat shares her personal transformation from skeptic to AI safety advocate. They explore the idea that AI could pose a near-term threat rather than just a long-term concern, the importance of prioritizing AI safety over other philanthropic endeavors, and the need for talented individuals to work on this issue. Kat highlights potential ways in which AI could harm humanity, such as creating super viruses or starting a nuclear war. They address common misconceptions, including the belief that AI will need humans or that it will be human-like.

Overall, the conversation emphasizes the urgency of addressing AI safety and the need for greater awareness and action. They also discuss the importance of funding AI safety research and the need for better regulation. The conversation ends on a hopeful note, with the speakers expressing optimism about the growing awareness and concern regarding AI safety.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

TIMESTAMPS:

AI Safety Urgency (00:00:00) Emphasizing the immediate need to focus on AI safety.
Superintelligent AI World (00:00:50) Considering the impact of AI smarter than humans.
AI Safety Charities (00:02:37) The necessity for more AI safety-focused charities.
Personal AI Safety Advocacy Journey (00:10:10) Kat Woods' transformation into an AI safety advocate.
AI Risk Work Encouragement (00:16:03) Urging skilled individuals to tackle AI risks.
AI Safety's Global Impact (00:17:06) AI safety's pivotal role in global challenges.
AI Safety Prioritization Struggles (00:18:02) The difficulty of making AI safety a priority.
Wealthy Individuals and AI Safety (00:19:55) Challenges for the wealthy in focusing on AI safety.
Superintelligent AI Threats (00:23:12) Potential global dangers posed by superintelligent AI.
Limits of Imagining Superintelligent AI (00:28:02) The struggle to fully grasp superintelligent AI's capabilities.
AI Containment Risks (00:32:19) The problem of effectively containing AI.
AI's Human-Like Risks (00:33:53) Risks of AI with human-like qualities.
AI Dangers (00:34:20) Potential ethical and safety risks of AI.
Nonlinear's Role in AI Safety (00:39:41) Nonlinear's contributions to AI safety work.
AI Safety Donations (00:41:53) Guidance on supporting AI safety financially.
Diverse AI Safety Recruitment (00:45:23) The need for varied expertise in AI safety.
AI Safety Rebranding (00:47:09) Proposing "AI risk" for clearer communication.
Effective Altruism and AI Safety (00:49:43) The relationship between effective altruism and AI safety.
AI Safety Complexity (00:52:12) The intricate nature of AI safety issues.
AGI Curiosity and Control (00:52:34) The balance of AGI's curiosity and human oversight.
AI Superintelligence Urgency (00:53:52) The critical timing and power of AI superintelligence.
AI Safety Work Perception (00:56:06) Changing the image of AI safety efforts.
AI Safety and Government Regulation (00:59:23) The potential for regulatory influence on AI safety.
Entertainment's AI Safety Role (01:04:24) How entertainment can promote AI safety awareness.
AI Safety Awareness Progress (01:05:37) Growing recognition and response to AI safety.
AI Safety Advocacy Funding (01:08:06) The importance of financial support for AI safety advocacy.
Effective Altruists and Rationalists Views (01:10:22) The stance of effective altruists and rationalists on AI safety.
AI Risk Marketing (01:11:46) The case for using marketing to highlight AI risks.

RESOURCES:

Nonlinear: https://www.nonlinear.org/

Best Account on Twitter: AI Notkilleveryoneism Memes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
...

Episode #29 - “Drop Everything To Stop AGI” For Humanity: An AI Safety Podcast

May 22, 2024 4:10 pm

FULL INTERVIEW STARTS AT (00:07:37)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

The world is waking up to the existential danger of unaligned AGI. But we are racing against time. Some heroes are stepping up, people like this week’s guest Chris Gerrby. Chris successfully organized people against AI in Sweden. In early May he left Sweden, moved to England, and is now spending 14 hours a day, 7 days a week, working to stop AGI. Learn how he plans to grow Pause AI as its new Chief Growth Officer, and his thoughts on how to make the case for pausing AI.

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps:



RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

September 4, 2024 3:06 pm

In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

BUY ROMAN’S NEW BOOK ON AMAZON
https://a.co/d/fPG6lOB

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

Episode #40 “Surviving Doom” For Humanity: An AI Risk Podcast

August 7, 2024 4:08 pm

In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same. James shares his powerful insight, long-time awareness, and expertise in helping others find a way to survive and rebuild after a post-AGI warning-shot disaster.

FULL INTERVIEW STARTS AT (00:04:47)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Timestamps
**Relevance to AGI (00:05:05)**
**Nuclear Threats and Survival (00:05:34)**
**Introduction to the Podcast (00:06:18)**
**Open Source AI Discussion (00:09:28)**
**James's Background and Location (00:11:00)**
**Prepping and Quality of Life (00:13:12)**
**Creating Spaces for Preparation (00:13:48)**
**Survival Odds and Nuclear Risks (00:21:12)**
**Long-Term Considerations (00:22:59)**
**The Warning Shot Discussion (00:24:21)**
**The Need for Preparation (00:27:38)**
**Planning for Population Centers (00:28:46)**
**Likelihood of Extinction (00:29:24)**
**Basic Preparedness Steps (00:30:04)**
**Natural Disaster Preparedness (00:32:15)**
**Timeline for Change (00:32:58)**
**Predictions for AI Breakthroughs (00:34:08)**
**Human Nature and Future Risks (00:37:06)**
**Societal Influences on Behavior (00:40:00)**
**Living Off-Grid (00:43:04)**
**Conformity Bias in Humanity (00:46:38)**
**Planting Seeds of Change (00:48:01)**
**The Evolution of Human Reasoning (00:48:22)**
**Looking Back to 1998 (00:48:52)**
**Emergency Preparedness Work (00:52:19)**
**The Shift to Effective Altruism (00:53:22)**
**The AI Safety Movement (00:54:24)**
**The Challenge of Public Awareness (00:55:40)**
**The Historical Context of AI Discussions (00:57:01)**
**The Role of Effective Altruism (00:58:11)**
**Barriers to Knowledge Spread (00:59:22)**
**The Future of AI Risk Advocacy (01:01:17)**
**Shifts in Mindset Over 26 Years (01:03:27)**
**The Impact of Youthful Optimism (01:04:37)**
**Disillusionment with Altruism (01:05:37)**
**Short Timelines and Urgency (01:07:48)**
**Human Nature and AI Development (01:08:49)**
**The Risks of AI Leadership (01:09:16)**
**Public Reaction to AI Risks (01:10:22)**
**Consequences for AI Researchers (01:11:18)**
**Contradictions of Abundance (01:11:42)**
**Personal Safety in a Risky World (01:12:40)**
**Assassination Risks for Powerful Figures (01:13:41)**
**Future Governance Challenges (01:14:44)**
**Distribution of AI Benefits (01:16:12)**
**Ethics and AI Development (01:18:11)**
**Moral Obligations to Non-Humans (01:19:02)**
**Utopian Futures and AI (01:21:16)**
**Varied Human Values (01:22:29)**
**International Cooperation on AI (01:27:57)**
**Hope Amidst Uncertainty (01:31:14)**
**Resilience in Crisis (01:31:32)**
**Building Safe Zones (01:32:18)**
**Urgency for Action (01:33:06)**
**Doomsday Prepping Reflections (01:33:56)**
**Celebration of Life (01:35:07)**
...

Episode #35 “The AI Risk Investigators: Inside Gladstone AI Part 1” For Humanity:An AI Risk Podcast

July 3, 2024 2:28 pm

In Episode #35, host John Sherman talks with Jeremie and Edouard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government, in any form, of the reality of AI risk. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows.

Gladstone AI Action Plan
https://www.gladstone.ai/action-plan

TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:

Sincerity and Sam Altman (00:00:00) Discussion on the perceived sincerity of Sam Altman and his actions, including insights into his character and motivations.

Introduction to Gladstone AI (00:01:14) Introduction to Gladstone AI, its involvement with the US government on AI risk, and the purpose of the podcast episode.

Doom Debates on YouTube (00:02:17)

YC Experience and Sincerity in Startups (00:08:13) Insight into the Y Combinator (YC) experience and the emphasis on sincerity in startups, with personal experiences and observations shared.

OpenAI and Sincerity (00:11:51) Exploration of sincerity in relation to OpenAI, including evaluations of the company's mission, actions, and the challenges it faces in the AI landscape.

The scaling story (00:21:33) Discussion of the scaling story related to AI capabilities and the impact of increasing data, processing power, and training models.

The call about GPT-3 (00:22:29) Edouard Harris receiving a call about the scaling story and the significance of GPT-3's capabilities, leading to a decision to focus on AI development.

Transition from Y Combinator (00:24:42) Jeremie and Edouard Harris leaving their previous company and transitioning from Y Combinator to focus on AI development.

Security concerns and exfiltration (00:31:35) Discussion about the security vulnerabilities and potential exfiltration of AI models from top labs, highlighting the inadequacy of security measures.

Government intervention and security (00:38:18) Exploration of the potential for government involvement in providing security assets

Resource reallocation for safety and security (00:40:03) Discussion about the need to reallocate resources for safety, security, and alignment technology

OpenAI's computational resource allocation (00:42:10) Concerns about OpenAI's failure to allocate computational resources for safety

China's Strategic Moves (00:43:07) Discussion on potential aggressive actions by China to prevent a permanent disadvantage in AI technology.

China's Sincerity in AI Safety (00:44:29) Debate on the sincerity of China's commitment to AI safety and the influence of the CCP.

Taiwan Semiconductor Manufacturing Company (TSMC) (00:47:47)

US and China's Power Constraints (00:51:30) Comparison of the constraints faced by the US and China in terms of advanced chips and grid power.

Nuclear Power and Renewable Energy (00:52:23) Discussion on the power sources being pursued by China

Future Scenarios (00:56:20) Exploration of potential outcomes if China overtakes the US in AI technology.
...

Episode #36 “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity Podcast

July 10, 2024 4:49 pm

In Episode #36, host John Sherman talks with Jeremie and Edouard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government, in any form, of the reality of AI risk. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and was broken into two shows; this is the second of the two.

Gladstone AI Action Plan
https://www.gladstone.ai/action-plan

TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:

**The whistleblower's concerns (00:00:00)**

**Introduction to the podcast (00:01:09)**

**The urgency of addressing AI risk (00:02:18)**

**The potential consequences of falling behind in AI (00:04:36)**

**Transitioning to working on AI risk (00:06:33)**

**Engagement with the State Department (00:08:07)**

**Project assessment and public visibility (00:10:10)**

**Motivation for taking on the detective work (00:13:16)**

**Alignment with the government's safety culture (00:17:03)**

**Potential government oversight of AI labs (00:20:50)**

**The whistleblowers' concerns (00:21:52)**

**Shifting control to the government (00:22:47)**

**Elite group within the government (00:24:12)**

**Government competence and allocation of resources (00:25:34)**

**Political level and tech expertise (00:27:58)**

**Challenges in government engagement (00:29:41)**

**State department's engagement and assessment (00:31:33)**

**Recognition of government competence (00:34:36)**

**Engagement with frontier labs (00:35:04)**

**Whistleblower insights and concerns (00:37:33)**

**Whistleblower motivations (00:41:58)**

**Engagements with AI Labs (00:42:54)**

**Emotional Impact of the Work (00:43:49)**

**Workshop with Government Officials (00:44:46)**

**Challenges in Policy Implementation (00:45:46)**

**Expertise and Insights (00:49:11)**

**Future Engagement with US Government (00:50:51)**

**Flexibility of Private Sector Entity (00:52:57)**

**Impact on Whistleblowing Culture (00:55:23)**

**Key Recommendations (00:57:03)**

**Security and Governance of AI Technology (01:00:11)**

**Obstacles and Timing in Hardware Development (01:04:26)**

**The AI Lab Security Measures (01:04:50)**

**Nvidia's Stance on Regulations (01:05:44)**

**Export Controls and Governance Failures (01:07:26)**

**Concerns about AGI and Alignment (01:13:16)**

**Implications for Future Generations (01:16:33)**

**Personal Transformation and Mental Health (01:19:23)**

**Starting a Nonprofit for AI Risk Awareness (01:21:51)**
...

What Is The Origin Of AI Safety? | AI Safety Movement | Episode #48

October 2, 2024 3:27 pm

In Episode #48, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement's origins.

Let's build community! Join the live For Humanity Community Meeting via Zoom, Thursdays at 8:30pm EST...explanation during the full show! USE THIS LINK: https://storyfarm.zoom.us/j/88987072403 PASSCODE: 789742

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

*************************

Welcome! In today's video, we delve into the AI safety movement and explore the origins of AI safety.

This video covers the origins of AI safety and the following topics:

- AI safety
- AI safety research
- Eliezer’s insights on AI safety research

********************

Discover more of our video content on the origins of AI safety. You'll find additional insights on this topic, along with relevant social media links.

YouTube: https://www.youtube.com/@ForHumanityPodcast
Website: http://www.storyfarm.com/

***************************

This video explores the origins of AI safety, AI safety research, and Eliezer’s insights on AI safety research.

Have I addressed your curiosity regarding the origins of AI safety?

We eagerly await your feedback and insights. Please drop a comment below, sharing your thoughts, queries, or suggestions about AI safety, AI safety research, Eliezer’s insights on AI safety research, and the origins of AI safety.
...

Please Look Up: For Humanity, An AI Safety Podcast Episode #1

October 30, 2023 3:37 pm

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is episode #1. Thank you for watching. I'm just a dad with a small business who doesn't want everyone to die. I hope you find the content accessible and informative. Any and all feedback is more than welcome in the comments; I'd like to make this interactive.

In March 2023, following the release of GPT-4, I came across an article in Time magazine online that changed my perspective on everything. I’m an optimist to my core, and a lover of technology. But that one article changed my outlook on the future more than anything I thought was remotely possible. The Time article was written by Eliezer Yudkowsky, a universally respected AI safety leader and researcher, who has been working on AI safety for more than 20 years.

He wrote:
"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

LITERALLY EVERYONE ON EARTH WILL DIE. That's what he wrote.

I read the article a dozen times. The journalist in me tried to poke holes. I couldn’t find one.
I’m an optimist who loves to have fun; I thought there was no way this could be real. LITERALLY EVERYONE ON EARTH WILL DIE?!??!?! I went down a rabbit hole: hundreds of hours of podcasts and dozens and dozens of articles and books.

Much gratitude and respect to the podcasters who got me to this point: brilliant hosts Lex Fridman, Dwarkesh Patel’s Lunar Society Podcast, Tom Bilyeu’s Impact Theory, Flo Read’s UnHerd, Ed Mylett, Harry Stebbings, Bankless, the Future of Life Institute Podcast, Eye on AI with Craig Smith, Liron Shapira, Robot Brains and more. Your work is foundational to this FOR HUMANITY podcast and to this debate, and whenever I use a clip from someone else’s podcast I will give full credit, promotion and thanks.

I am convinced, based on extensive research and decades of investigation by many leading experts, that on our current course, human extinction due to Artificial Intelligence will happen in my lifetime, most likely in the next two to ten years.

In Episode #1, this podcast lays out how the makers of AI:
-openly admit their technology is not controllable
-openly admit they do not understand how it works
-openly admit it has the power to cause human extinction within 1-50 years
-openly admit that they are focusing nearly all of their time and money on making it stronger, not safer
-beg for regulation (who does that?), yet there is no current government regulation; the sale of a ham sandwich is far more regulated
-are trying to live out their childhood sci-fi fantasies


RESOURCES :
Follow Impact Theory with Tom Bilyeu on YouTube
https://www.youtube.com/playlist?list=PL8qcvQ7Byc3OJ02hbWJbHWePh4XEg3cvo
Follow Lex Fridman Podcast on Youtube
https://www.youtube.com/@lexfridman
Follow Dwarkesh Patel’s Podcast on YouTube
https://www.youtube.com/@DwarkeshPatel
Follow Eye on AI with Craig Smith on YouTube
https://www.youtube.com/channel/UC-o9u9QL4zXzBwjvT1gmzNg
Follow Unherd Podcast on YouTube
https://www.youtube.com/@UnHerd/about
Follow Robot Brains Podcast on YouTube
https://www.youtube.com/@UCXNviQjBONXljxkJzNV-Xbw
Follow Bankless Podcast on Youtube
https://www.youtube.com/@Bankless
Follow Ed Mylett Podcast on YouTube
https://www.youtube.com/@EdMylettShow
Follow 20VC with Harry Stebbings on YouTube
Follow the Future of Life Institute Podcast

Eliezer Yudkowsky https://twitter.com/ESYudkowsky

Connor Leahy https://twitter.com/NPCollapse
➡️Conjecture Research https://www.conjecture.dev/research/
➡️EleutherAI Discord https://discord.com/invite/zBGx3azzUn
➡️Stop AGI https://www.stop.ai/

Max Tegmark's Twitter: https://twitter.com/tegmark
➡️Max's Website: https://space.mit.edu/home/tegmark
➡️Future of Life Institute: https://futureoflife.org

Mo Gawdat: Website: https://www.mogawdat.com/
➡️YouTube: https://www.youtube.com/@mogawdatofficial
➡️Twitter: https://twitter.com/mgawdat
➡️Instagram: https://www.instagram.com/mo_gawdat/

Future of Life Institute
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ LINKEDIN: https://www.linkedin.com/company/futu...

#AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION
#CONNORLEAHY #ELIEZERYUDKOWSKY #MAXTEGMARK #MOGAWDAT #ROMANYAMPOLSKIY
...

The Alignment Problem: For Humanity, An AI Safety Podcast Episode #2

November 8, 2023 3:00 pm

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

This is Episode #2: The Alignment Problem, the fact that the makers of AI have no idea how to control their technology or align it with human values, goals, and ethics. For example: don't kill everyone.

Much gratitude and respect to the podcasters who got me to this point: brilliant hosts Lex Fridman, Dwarkesh Patel’s Lunar Society Podcast, Tom Bilyeu’s Impact Theory, Flo Read’s UnHerd, Ed Mylett, Harry Stebbings, Bankless, the Future of Life Institute Podcast, Eye on AI with Craig Smith, Liron Shapira, Robot Brains and more. Your work is foundational to this FOR HUMANITY podcast and to this debate, and whenever I use a clip from someone else’s podcast I will give full credit, promotion and thanks.

I am convinced, based on extensive research and decades of investigation by many leading experts, that on our current course, human extinction due to Artificial Intelligence will happen in my lifetime, most likely in the next two to ten years.

This podcast lays out how the makers of AI:
-openly admit their technology is not controllable
-openly admit they do not understand how it works
-openly admit it has the power to cause human extinction within 1-50 years
-openly admit they are focusing nearly all of their time and money on making it stronger, not safer
-beg for regulation (who does that?), yet there is no current government regulation; the sale of a ham sandwich is far more regulated
-are trying to live out their childhood sci-fi fantasies

PLEASE LIKE, SHARE, SUBSCRIBE AND COMMENT–WE HAVE NO TIME TO WASTE.

RESOURCES :
Follow Impact Theory with Tom Bilyeu on YouTube
Follow Lex Fridman Podcast on Youtube
Follow Dwarkesh Patel’s Podcast on YouTube
Follow Eye on AI with Craig Smith on YouTube
Follow Unherd Podcast on YouTube
Follow Robot Brains Podcast on YouTube
Follow Bankless Podcast on Youtube
Follow Ed Mylett Podcast on YouTube
Follow 20VC with Harry Stebbings on YouTube
Follow the Future of Life Institute Podcast

Eliezer Yudkowsky https://twitter.com/ESYudkowsky

Connor Leahy https://twitter.com/NPCollapse
➡️Conjecture Research https://www.conjecture.dev/research/
➡️EleutherAI Discord https://discord.com/invite/zBGx3azzUn
➡️Stop AGI https://www.stop.ai/

Max Tegmark's Twitter: https://twitter.com/tegmark
➡️Max's Website: https://space.mit.edu/home/tegmark
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Future of Life Institute: https://futureoflife.org

Mo Gawdat: Website: https://www.mogawdat.com/
➡️YouTube: https://www.youtube.com/@mogawdatofficial
➡️Twitter: https://twitter.com/mgawdat
➡️Instagram: https://www.instagram.com/mo_gawdat/

Future of Life Institute
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflif…
➡️ META: https://www.facebook.com/futureoflife...
➡️ LINKEDIN: https://www.linkedin.com/company/futu...

#AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION
#CONNORLEAHY #ELIEZERYUDKOWSKY #MAXTEGMARK #MOGAWDAT #ROMANYAMPOLSKIY
...

The Interpretability Problem: For Humanity, An AI Safety Podcast Episode #3

November 15, 2023 7:17 am

Episode #3: The Interpretability Problem. In this episode we'll hear from AI Safety researchers including Eliezer Yudkowsky, Max Tegmark, Connor Leahy, Roman Yampolskiy, and many more discussing how current AI systems are black boxes; no one has any clue how they work inside.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.



#AI #airisk #alignment #interpretability #doom #aisafety #openai #anthropic #eliezeryudkowsky #maxtegmark #connorleahy
...

Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4

November 22, 2023 6:08 am

In Episode #4, John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. Yampolskiy is the author of more than 100 publications, including numerous books.

Among the many topics discussed in this episode:
-why more average people aren't more involved and upset about AI safety
-how frontier AI capabilities workers go to work every day knowing their work risks human extinction and go back to work the next day
-how we can talk to our kids about these dark, existential issues
-what if AI safety researchers concerned about human extinction from AI are just somehow wrong?

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

DR. ROMAN YAMPOLSKIY RESOURCES
Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampolskiy
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Roman on Medium: https://romanyam.medium.com/

#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind
...

"Nationalize Big AI" Roman Yampolskiy Interview Part 2: For Humanity An AI Safety Podcast Episode #5

November 27, 2023 8:20 pm

In Episode #5 Part 2: John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned AI safety researcher.

Among the many topics discussed in this episode:
-what is at the core of AI safety risk skepticism
-why AI safety research leaders themselves are so all over the map
-why journalism is failing so miserably to cover AI safety appropriately
-the drastic step the federal government could take to really slow Big AI down

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

ROMAN YAMPOLSKIY RESOURCES
Roman Yampolskiy's Twitter: https://twitter.com/romanyam
➡️Roman's YouTube Channel: https://www.youtube.com/c/RomanYampolskiy
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Roman on Medium: https://romanyam.medium.com/

#ai #aisafety #airisk #humanextinction #romanyampolskiy #samaltman #openai #anthropic #deepmind
...

"Team Save Us vs Team Kill Us" For Humanity, An AI Safety Podcast Episode #6: The Munk Debate

December 6, 2023 9:22 am

In Episode #6, Team Save Us vs. Team Kill Us, host John Sherman weaves together highlights and analysis of The Munk Debate on AI Safety to show the case for and against AI as a human extinction risk.

The debate took place in Toronto in June 2023, and it remains entirely current and relevant today and stands alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency.

In this Munk Debate, you’ll meet two teams: Max Tegmark and Yoshua Bengio on Team Save Us (John’s title, not theirs), and Yann LeCun and Melanie Mitchell on Team Kill Us (they’re called pro/con in the debate; Kill v Save is all John). Host John Sherman adds in some current events and colorful analysis (and language) throughout.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. Let’s call it facts and analysis.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES

THE MUNK DEBATES: https://munkdebates.com

Max Tegmark
➡️X: https://twitter.com/tegmark
➡️Max's Website: https://space.mit.edu/home/tegmark
➡️Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/...
➡️Future of Life Institute: https://futureoflife.org

Yoshua Bengio
➡️Website: https://yoshuabengio.org/

Melanie Mitchell
➡️Website: https://melaniemitchell.me/
➡️X: https://x.com/MelMitchell1?s=20

Yann LeCun
➡️Google Scholar: https://scholar.google.com/citations?...
➡️X: https://x.com/ylecun?s=20

#AI #AISAFETY #AIRISK #OPENAI #ANTHROPIC #DEEPMIND #HUMANEXTINCTION #YANNLECUN #MELANIEMITCHELL #MAXTEGMARK #YOSHUABENGIO
...

What Do Moms Think About AI Extinction Risk? | AI Threat Awareness | Episode #7

December 14, 2023 6:11 pm

Thank you for tuning in to our video discussing AI threat awareness and the AI risk of human extinction.

In this video, we delve into the question: What is the AI risk of human extinction? Join us as we guide you through this important topic.

- Moms talk AI extinction risk
- Threat of human extinction from AI
- How moms are addressing AI extinction risks

*************************

You've heard all the tech bros. Now listen to the moms.

In Episode #7, "Moms Talk AI Extinction Risk" host John Sherman moves the AI Safety debate from the tech world to the real world.

30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children's yet unborn children too.

But they do not have this authorization.

Who will protect our children if not parents?

There is no parental or maternal footing in the AI safety debate. But what if there was? What if we stopped and asked: wait, are you sure you want to do this? Someone (or more precisely, everyone) might get hurt.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

**************************

Discover more of our video content on the AI risk of human extinction. You'll find additional insights on this topic, along with relevant social media links.

YouTube: https://www.youtube.com/@ForHumanityPodcast
Website: http://www.storyfarm.com/

***************************

This video explores the AI risk of human extinction: moms talk AI extinction risk, the threat of human extinction from AI, and how moms are addressing AI extinction risks.

Have I addressed your curiosity regarding the AI risk of human extinction?

We eagerly await your feedback and insights. Please drop a comment below, sharing your thoughts, queries, or suggestions about moms talking AI extinction risk, the threat of human extinction from AI, and how moms are addressing AI extinction risks.
...

"AI's Top 3 Doomers" For Humanity, An AI Safety Podcast: Episode #8

December 22, 2023 3:40 pm

Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it.

In Episode #8, host John Sherman points fingers and lays blame. How is it possible we're actually discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans in as little as 2 years.

This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

#samaltman #darioamodei #yannlecun #ai #aisafety
...

Veteran Marine vs. AGI, For Humanity An AI Safety Podcast: Episode #9 Sean Bradley Interview

January 3, 2024 6:39 am

Do you believe the big AI companies when they tell you their work could kill every last human on earth? You are not alone. You are part of a growing general public that opposes unaligned AI capabilities development.

In Episode #9, we meet Sean Bradley, a veteran Marine who served his country for six years, including as a helicopter door gunner. Sean left the service as a sergeant and now lives in San Diego, where he is married, working and in college. Sean is a viewer of For Humanity and a member of our growing community of the AI-risk aware.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:

More on the little robot:

https://themessenger.com/tech/rob-rob...
...

"Eliezer Yudkowsky's 2024 Doom Update" For Humanity: An AI Safety Podcast, Episode #10

January 10, 2024 2:52 am

In Episode #10, AI Safety Research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024. Be warned, this is a heavy episode. But there is some hope and a laugh at the end.

Most important among them, he believes:
-Humanity no longer has 30-50 years to solve the alignment and interpretability problems; our broken processes just won't allow it
-Human augmentation is the only viable path for humans to compete with AGIs
-We have ONE YEAR, THIS YEAR, 2024, to mount a global WW2-style response to the extinction risk of AI.
-This battle is EASIER to win than WW2 :)

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:

Liron Shapira on Theo Jaffee Podcast
https://www.youtube.com/watch?v=YfEcAtHExFM
...

"Artist vs. AI Risk" For Humanity: An AI Safety Podcast Episode #11 Stephen Hanson Interview

January 17, 2024 12:56 am

In Episode #11, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022, and has spent 12+ months carrying the weight of it all. John and Steve talk about what it's like to have a family and how to talk to them about AI risk, what the future holds, and what we the AI Risk Realists can do to change the future, while keeping our sanity at the same time.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Resources:

STEVE'S ART! stephenhansonart.bigcartel.com

Get ahead for next week and check out Theo Jaffee's YouTube channel:

https://youtube.com/@theojaffee8530?si=kbK7jCvril5SMgfe
...

"AI Risk Debate" For Humanity: An AI Safety Podcast Episode #12 Theo Jaffee Interview

January 24, 2024 5:55 pm

In Episode #12, we have our first For Humanity debate!! John talks with Theo Jaffee, a fast-rising AI podcaster and self-described “techno-optimist.” The debate covers a wide range of topics in AI risk.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Resources

Theo’s YouTube Channel : https://youtube.com/@theojaffee8530?si=aBnWNdViCiL4ZaEg

Glossary: First definitions by ChatGPT-4. I asked it to give answers simple enough for an elementary school student to understand (lol, I often find this helpful!). Commentaries by John Sherman.

Reinforcement Learning with Human Feedback (RLHF):
Definition: RLHF, or Reinforcement Learning with Human Feedback, is like teaching a computer to make decisions by giving it rewards when it does something good and telling it what's right when it makes a mistake. It's a way for computers to learn and get better at tasks with the help of guidance from humans, just like how a teacher helps students learn. So, it's like a teamwork between people and computers to make the computer really smart!
Commentary: RLHF is widely seen as bullshit by AI safety researchers like Connor Leahy. When you give an AI model a thumbs up or thumbs down for its answer, you are giving it RLHF. But Leahy says that without knowing what’s happening inside the black-box system, RLHF is not alignment work at all; it’s just blindly poking at the model in the dark to get a different result that you also don’t know how it arrived at.
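
For readers who want to see the basic mechanic, here is a deliberately tiny, hypothetical sketch of the thumbs-up/thumbs-down loop described above. It is not how production RLHF pipelines actually work (those train a separate reward model and then fine-tune the language model with reinforcement learning); the answer names, scores, and learning rate are all made up purely for illustration.

```python
import random

# Toy illustration of human feedback as a reward signal (not real RLHF).
# The "model" just keeps a preference score for each canned answer and
# nudges that score up or down based on thumbs-up / thumbs-down feedback.

answers = {"answer_A": 0.0, "answer_B": 0.0}   # preference scores
learning_rate = 0.5

def pick_answer():
    # Usually prefer the higher-scored answer, with a little exploration.
    if random.random() < 0.1:
        return random.choice(list(answers))
    return max(answers, key=answers.get)

def give_feedback(answer, thumbs_up):
    reward = 1.0 if thumbs_up else -1.0
    answers[answer] += learning_rate * reward   # nudge the score

# Simulate a user who always prefers answer_B.
for _ in range(20):
    chosen = pick_answer()
    give_feedback(chosen, thumbs_up=(chosen == "answer_B"))

print(answers)   # answer_B ends up with a clearly higher score
```

Leahy's criticism still applies to this picture: the scores move, but nothing here tells you why the model produced either answer in the first place.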

Model Weights
Definition: Model weights are like the special numbers that help a computer understand and remember things. Imagine it's like a recipe book, and these weights are the amounts of ingredients needed to make a cake. When the computer learns new things, these weights get adjusted so that it gets better at its job, just like changing the recipe to make the cake taste even better! So, model weights are like the secret ingredients that make the computer really good at what it does.
Commentary: Releasing the model weights of a model publicly, open-sourced, is very controversial. Meta and Yann LeCun are big fans of this, which makes me automatically opposed.

Foom/Fast Take-off:
Definition: "AI fast take-off" or "foom" refers to the idea that artificial intelligence (AI) could become super smart and powerful really quickly. It's like imagining a computer getting super smart all of a sudden, like magic! Some people use the word "foom" to talk about the possibility of AI becoming super intelligent in a short amount of time. It's a bit like picturing a computer going from learning simple things to becoming incredibly smart in the blink of an eye! Foom comes from cartoons, it’s the sound a super hero makes in comic books when they burst off the ground into flight.
Commentary: Many AI safety researchers think foom is very possible. It simply means that once an AI system begins recursively improving itself, all on its own, with no sleep, at speeds much faster than a human, and with ever-increasing compute speed and power, then within a matter of hours or days an Artificial General Intelligence could become an Artificial Super Intelligence, and we could lose control and potentially meet extinction very quickly.
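
To make the compounding intuition concrete, here is a toy back-of-the-envelope calculation in Python. It is only an illustration of how recursive improvement snowballs, not a prediction; the 10% figures, cycle times, and number of cycles are arbitrary assumptions.

```python
# Toy illustration only (not a prediction): how compounding self-improvement
# snowballs. Assume each improvement cycle makes the system 10% more capable
# AND 10% faster at running the next cycle.

capability = 1.0          # arbitrary starting capability
cycle_hours = 10.0        # time the first improvement cycle takes
elapsed = 0.0

for cycle in range(1, 31):
    elapsed += cycle_hours
    capability *= 1.10    # 10% more capable each cycle
    cycle_hours *= 0.90   # each cycle runs 10% faster than the last

print(f"After 30 cycles: ~{capability:.1f}x starting capability "
      f"in ~{elapsed:.0f} hours")
# Prints roughly: After 30 cycles: ~17.4x starting capability in ~96 hours
```

The point of the toy math is just that improvement applied to the improver compounds: the gains get bigger while the wait between them gets shorter.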

Gradient Descent: Gradient descent is like a treasure hunt for the best way to do something. Imagine you're on a big hill with a metal detector, trying to find the lowest point. The detector beeps louder when you're closer to the lowest spot. In gradient descent, you adjust your steps based on these beeps to reach the lowest point on the hill, and in the computer world, it helps find the best values for a task, like making a robot walk smoothly or a computer learn better.
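
For anyone who wants to see the metal-detector analogy in code, here is a minimal gradient descent sketch on a simple one-variable "hill." The function, starting point, and step size are just illustrative choices, not anything from the episode.

```python
# A tiny gradient descent demo: find the lowest point of the "hill" f(x) = (x - 3)^2.
# The metal-detector "beeps" in the analogy correspond to the slope (the gradient):
# the steeper the slope, the bigger the downhill step we take.

def f(x):
    return (x - 3) ** 2        # the hill we want to descend

def slope(x):
    return 2 * (x - 3)         # derivative of f at x

x = 0.0                        # start somewhere on the hill
step_size = 0.1                # how far we move each step (the "learning rate")

for _ in range(50):
    x = x - step_size * slope(x)   # walk downhill, against the slope

print(round(x, 4))             # ends up very close to 3, the bottom of the hill
```

Training a neural network is the same idea repeated billions of times, except the "hill" has billions of dimensions (one per model weight) instead of one.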

Orthogonality: Orthogonality is like making sure things are independent and don't mess each other up. Think of a chef organizing ingredients on a table – if each ingredient has its own space and doesn't mix with others, it's easier to work. In computers, orthogonality means keeping different parts separate, so changing one thing doesn't accidentally affect something else. It's like having a well-organized kitchen where each tool has its own place, making it easy to cook without chaos!
...

"Uncontrollable AI" For Humanity: An AI Safety Podcast, Episode #13 , Darren McKee Interview

January 30, 2024 7:58 pm

In Episode #13, “Uncontrollable AI,” John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World.

Apologies for the laggy cam on Darren!

Darren’s book is an excellent resource; like this podcast, it is intended for the general public.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Resources:

Darren’s Book

https://www.amazon.com/Uncontrollable-Threat-Artificial-Superintelligence-World/dp/B0CPB1ZT2L/ref=sr_1_1?keywords=darren+mckee+uncontrollable&qid=1706532485&sr=8-1

JOBS IN AI: https://jobs.80000hours.org/

My Dad's Favorite Messiah Recording (3:22-6:55 only lol!!)

https://www.youtube.com/watch?v=lFjQ77ol2DI&t=202s

Sample letter/email to an elected official:

Dear XXXX-

I'm a constituent of yours, I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history, nothing else is close.

Have you read the 22-word statement from the Center for AI Safety on 5/31/23 that Sam Altman and all the big AI CEOs signed? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war."

Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?

Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth, within 1-50 years. This is not science fiction or hyperbole. This is our current status quo.

It's like a pharma company saying they have a drug that can cure all diseases, but it hasn't been through any clinical trials and it may also kill anyone who takes it. Then, with no oversight or regulation, they have put the new drug in the public water supply.

Big AI is making tech they openly admit they cannot control, do not understand, and that could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.

I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations at a certain level.

I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?

Thanks very much.

XXXXXX
Address
Phone
...

"Pause AI or Die" For Humanity: An AI Safety Podcast Episode #14, Joep Meindertsma Interview

February 7, 2024 2:14 pm

In Episode #14, John interviews Joep Meinderstma, Founder of Pause AI, a global AI safety policy and protest organization. Pause AI was behind the first ever AI Safety protests on the planet.

John and Joep talk about what's being done, how it all feels, how it all might end, and even broach the darkest corner of all of this: suffering risk. This conversation embodies a spirit this movement needs: we can be upbeat and positive as we talk about the darkest subjects possible. It's not "optimism" to race to build suicide machines, but it is optimism to assume the best, and to believe we can and must succeed no matter what the odds.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.



Resources:

https://pauseai.info/

https://discord.gg/pVMWjddaW7

Sample Letter to Elected Leaders:

Dear XXXX-

I'm a constituent of yours, I have lived in your district for X years. I'm writing today because I am gravely concerned about the existential threat to humanity from Artificial Intelligence. It is the most important issue in human history, nothing else is close.

Have you read the 22-word statement from the Center for AI Safety on 5/31/23 that Sam Altman and all the big AI CEOs signed? It reads: "Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war."

Do you believe them? If so, what are you doing to prevent human extinction? If not, why don't you believe them?

Most prominent AI safety researchers say the default outcome, if we do not make major changes right now, is that AI will kill every living thing on earth, within 1-50 years. This is not science fiction or hyperbole. This is our current status quo.

It's like a pharma company saying they have a drug that can cure all diseases, but it hasn't been through any clinical trials and it may also kill anyone who takes it. Then, with no oversight or regulation, they have put the new drug in the public water supply.

Big AI is making tech they openly admit they cannot control, do not understand, and that could kill us all. Their resources are 99:1 on making the tech stronger and faster, not safer. And yet they move forward, daily, with no oversight or regulation.

I am asking you to become a leader in AI safety. Many policy ideas could help, and you could help them become law. Things like liability reform so AI companies are liable for harm, hard caps on compute power, and tracking and reporting of all chip locations at a certain level.

I'd like to discuss this with you or someone from your office over the phone or a Zoom. Would that be possible?

Thanks very much.

XXXXXX
Address
Phone
...

"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15

February 14, 2024 7:43 am

In Episode #15, AI Risk Super Bowl I: Connor vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk podcast between AI safety hero Connor Leahy and acceleration cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24.

With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Resources:

Machine Learning Street Talk - YouTube

Full Debate, e/acc Leader Beff Jezos vs Doomer Connor Leahy

How Guillaume Verdon Became BEFF JEZOS, Founder of e/acc

Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407

Next week’s guest Timothy Lee’s Website and related writing:
https://www.understandingai.org/
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
https://www.understandingai.org/p/why-im-not-worried-about-ai-taking
...

"AI Risk-Denier Down" For Humanity, An AI Safety Podcast Episode #16

February 21, 2024 4:19 am

In Episode #16, AI Risk Denier Down, things get weird.

This show did not have to be like this. Our guest in Episode #16 is Timothy Lee, a computer scientist and journalist who founded and runs understandingai.org. Tim has written about AI risk many times, including these two recent essays:

https://www.understandingai.org/p/why...

https://www.understandingai.org/p/why...

Tim was not prepared to discuss this work, which is when things started to get off the rails.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

MY QUESTIONS FOR TIM (We didn’t even get halfway through, lol. YouTube won't let me put all of them here, so I'm just including the second-essay questions.)

OK, let's get into your second essay, "Why I'm not afraid of superintelligent AI taking over the world," from 11/15/23.

-You find Nick Bostrom’s example, that algorithms beating humans at chess is a sign algorithms could beat humans at everything, unconvincing. You wrote: “Chess is a game of perfect information and simple, deterministic rules. This means that it’s always possible to make chess software more powerful with better algorithms and more computing power.”

-You hold up chess as a striking example of how AI will not take over the world. But I’d like to talk about AI safety researcher Steve Omohundro’s take on chess.
-He says if you had an unaligned AGI you asked to get better at chess, it would first break into other servers to steal computing power so it would be better at chess. Then, when you discover this and try to stop it by turning it off, it sees your turning it off as a threat to its improving at chess, so it murders you.
-Where is he wrong?

-You wrote: “Think about a hypothetical graduate student. Let’s say that she was able to reach the frontiers of physics knowledge after reading 20 textbooks. Could she have achieved a superhuman understanding of physics by reading 200 textbooks? Obviously not. Those extra 180 textbooks contain a lot of words, they don’t contain very much knowledge she doesn’t already have. So too with AI systems. I suspect that on many tasks, their performance will start to plateau around human-level performance. Not because they “run out of data,” but because they reached the frontiers of human knowledge.”
-In this you seem to assume that any one human is capable of mastering all of the knowledge in a subject area better than any AI, because you seem to believe that one human is capable of holding ALL of the knowledge available on a given subject.
-This is ludicrous to me. You think humans are far too special.
-AN AGI WILL HAVE READ EVERY BOOK EVER WRITTEN. MILLIONS OF BOOKS. ACTIVELY CROSS-REFERENCING ACROSS EVERY DISCIPLINE.
-How could any human possibly compete with an AGI system that never sleeps and can read every word ever written in any language? No human could ever do this.
-Are you saying humans are the most perfect vessels of knowledge consumption possible in the universe?
-A human who has read 1000 books on one area can compete with an AGI who has read millions of books in thousands of areas for knowledge? Really?
-You wrote: “AI safetyists assume that all problems can be solved with the application of enough brainpower. But for many problems, having the right knowledge matters more. And a lot of economically significant knowledge is not contained in any public data set. It’s locked up in the brains and private databases of millions of individuals and organizations spread across the economy and around the world.”
-Why do you assume an unaligned AGI would not raid every private database on earth in a very short time and take in all this knowledge you find so special?
-Does this claim rest on the security protocols of the big AI companies?
-Security protocols, even at OpenAI, are seen as highly vulnerable to large-scale nation-state hacking. If China could hack into OpenAI, an AGI could surely hack into it, or anything else. An AGI’s ability to spot and exploit vulnerabilities in human-written code is widely predicted.

-You wrote that in the end “we’ll get a pluralistic and competitive economy that’s not too different from the one we have now.”
-Do you really believe this? That the post-AGI economy will be “not too different from the one we have now”??? Literally nobody other than you is saying this.

-Lets see if we can leave this conversation with a note of agreement. Is there anything you think we can agree on?
...

"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview

February 28, 2024 3:51 pm

In Episode #17, AI Risk + Jenga, Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI Risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Resources:

PAUSE AI DISCORD
https://discord.gg/pVMWjddaW7

Liron's YouTube channel:
https://youtube.com/@liron00?si=cqIo5DUPAzHkmdkR

More on rationalism:
https://www.lesswrong.com/

More on California State Senate Bill SB-1047:
https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047&utm_source=substack&utm_medium=email

https://thezvi.substack.com/p/on-the-proposed-california-sb-1047?utm_source=substack&utm_medium=email

Warren Wolf
Warren Wolf, "Señor Mouse" - The Checkout: Live at Berklee
https://youtu.be/OZDwzBnn6uc?si=o5BjlRwfy7yuIRCL
...

“Worse Than Extinction, CTO vs. S-Risk” For Humanity, An AI Safety Podcast Episode #18

March 6, 2024 3:07 pm

In Episode #18, “Worse Than Extinction, CTO vs. S-Risk” Louis Berman Interview, John talks with tech CTO Louis Berman about a broad range of AI risk topics centered around existential risk. The conversation goes to the darkest corner of the AI risk debate, S-risk, or suffering risk.

This episode has a lot in it that is very hard to hear. And say.

The tech CEOs are spinning visions of abundance and utopia for the public. Someone needs to fill in the full picture of the realm of possibilities, no matter how hard it is to hear. Do not fear the truth, fear ignorance.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:

John's Upcoming Talk in Philadelphia!
It is open to the public, you will need to make a free account at meetup.com
https://www.meetup.com/philly-net/events/298710679/

Excellent Background on S-Risk w supporting links
https://80000hours.org/problem-profiles/s-risks/

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.gg/pVMWjddaW7
...

“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19

March 13, 2024 3:05 pm

Interview starts at 9:23

In Episode #19, “David Shapiro Interview,” John talks with AI/Tech YouTube star David Shapiro. David has several successful YouTube channels. His main channel (link below: go follow him!), with more than 140k subscribers, is a constant source of new video content on AI, AGI, and the post-labor economy. Dave does a great job breaking things down.

But a lot of Dave’s content is about a post-AGI future. And this podcast’s main concern is that we won’t get there, because AGI will kill us all first. So this show is a two-part conversation: first, about whether we can live past AGI, and second, about the issues we’d face in a world where humans and AGIs are co-existing. In this trailer, Dave gets to the edge of giving his p(doom).

John and David discuss how humans can stay in control of a superintelligence, what their p(doom)s are, and what happens to the energy companies if fusion is achieved, among many topics.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:

FOLLOW DAVID SHAPIRO ON YOUTUBE!
https://youtube.com/@DaveShap?si=o_USH-v0fDyo23fm

DAVID’S OTHER LINKS:

Patreon (and Discord)
patreon.com/daveshap

Substack (Free)
daveshap.substack.com

GitHub (Open Source)
github.com/daveshap

Systems Thinking Channel
youtube.com/@Systems.Thinking

Mythic Archetypes Channel
youtube.com/@MythicArchetypes

Pragmatic Progressive Channel
youtube.com/@PragmaticProgressive

Sacred Masculinity Channel
youtube.com/@Sacred.Masculinity
...

“AI Risk Realist vs. Coding Cowboy” For Humanity: An AI Safety Podcast Episode #20

March 20, 2024 1:52 pm

Full interview starts at 14:48

In Episode #20, “AI Safety Debate: Risk Realist vs. Coding Cowboy,” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full conversation covers issues like whether AI systems can be contained to the digital world, whether we should build data centers with explosives lining the walls just in case, and whether the AI CEOs are just big liars. Mark believes we are on a safe course, and that when that changes, we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

Community Note: So, after much commentary, I have done away with the Doom Rumble during the trailers. I like(d) it, and I think it added some drama, but the people have spoken, and it is dead. RIP Doom Rumble, 2023-2024. Also, I had a bit of a head cold at the time of some of the recording and sound a little nasal in the open and close. My apologies lol, but a few sniffles can’t stop this thing!!

RESOURCES:

Time Article on the New Report:
AI Poses Extinction-Level Risk, State-Funded Report Says | TIME

John's Upcoming Talk in Philadelphia!
It is open to the public, you will need to make a free account at meetup.com
https://www.meetup.com/philly-net/eve...

FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube

Dave Shapiro’s New Video where he talks about For Humanity
AGI: What will the first 90 days be like? And more VEXING questions from the audience!

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS

Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.gg/pVMWjddaW7
...

“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21

March 27, 2024 2:54 pm

Interview starts at 20:10
Some highlights of John’s news career start at 9:14

In Episode #21, host John Sherman and WJZY-TV News Director Casey Clark explore the significant underreporting of AI's existential risks in the media. They recount a disturbing incident where AI bots infiltrated a city council meeting, spewing hateful messages. The conversation delves into the challenges of conveying the complexities of artificial general intelligence to the public and the media's struggle to present such abstract concepts compellingly. They predict job losses as the first major AI-related news story to break through and speculate on the future of AI-generated news anchors, emphasizing the need for human reporters in the field.

TIMESTAMPS:
**AI Existential Risk Underreported (00:00:00)**
Brief on AI's existential risk being overlooked by media.

**AI's Influence on News (00:00:24)**
AI's effect on news content and algorithmic manipulation risks.

**Communicating AI Risks (00:02:02)**
Challenges in making AI risks relatable to the public.

**Microsoft Talk Experience (00:04:31)**
John Sherman's insights from a Microsoft center talk on AI risks.

**News Reporting Career (00:08:51)**
John Sherman's award-winning environmental reporting background.

**Casey Clark Interview (00:19:44)**
Casey Clark on why AI risks are missing from news discussions.

**Public's AI Awareness (00:25:43)**
Limited public knowledge of artificial general intelligence threats.

**Urgency of AI Risk Discussion (00:26:13)**
Comparing AI risk ignorance to historical underestimation of atomic bomb dangers.

**AI's News Content Influence (00:27:23)**
AI's potential to skew news content and public opinion.

**AI Complexity in Reporting (00:30:43)**
TV reporters' struggle with complex AI topics leads to avoidance.

**AI's Decision-Making Influence (00:32:48)**
Concerns over AI algorithm bias in news content.

**Introducing AI Stories in News (00:36:32)**
Obstacles in simplifying AI topics for the public and newsrooms.

**AI Coverage Influences (00:39:44)**
Corporate and financial interests affecting AI risk reporting.

**AI Specialization in Journalism (00:41:01)**
Potential for dedicated AI beat reporters in newsrooms.

**Pitching AI Stories to News (00:43:05)**
Strategies for pitching AI stories with clear, visual messages.

**Protest Coverage Tactics (00:45:52)**
Making protests appealing to media through mainstream visuals.

**AI's Role in Job Loss (00:48:27)**
The contrast in news companies' approaches to AI and job impacts.

**Journalism's Decline (00:49:53)**
The effect of losing experienced news anchors on journalism.

**Reporting Challenges (00:52:19)**
The effect of inexperienced reporters on journalism quality.

**Investigative Reporting Value (00:52:42)**
The importance of investigative journalism in government accountability.

**Understanding Exponential Change (00:56:02)**
The human struggle to grasp exponential change and its relevance to AI regulation.

**AI Regulation Challenges (00:57:11)**
The complexities and motivations behind AI regulation.

**AI Governance Dilemma (00:58:27)**
The balance between government control and tech company accountability in AI governance.

**Media Engagement on AI Risks (01:00:33)**
How to engage media to increase AI risk awareness.

**Individual Impact on AI Awareness (01:03:00)**
Encouraging personal stories to highlight AI risks in media.

**Empowering Media Engagement (01:04:00)**
Guiding individuals to effectively communicate AI concerns to media.

**Success Story (01:06:17)**
An immigrant's inspiring success tale emphasizing perseverance.

**Self-Belief Reflection (01:11:46)**
Reflecting on the importance of self-confidence and honoring past sacrifices.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.gg/pVMWjddaW7

See more of John’s Talk in Philly:
https://x.com/ForHumanityPod/status/1772449876388765831?s=20

FOLLOW DAVID SHAPIRO ON YOUTUBE!
David Shapiro - YouTube

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
...

Episode #22 - “Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety Podcast

April 3, 2024 1:44 pm

In episode #22, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of responsible AI development and the crucial role of oversight.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

RESOURCES:

Vanity Fair Gushes in 2015

Business Insider: Sam Altman’s Act May Be Wearing Thin

Oprah and Maya Angelou

Best Account on Twitter: AI Notkilleveryoneism Memes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.gg/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS

Timestamps:
The man who holds the power (00:00:00) Discussion about Sam Altman's power and its implications for humanity.
The safety crisis (00:01:11) Concerns about safety in AI technology and the need for protection against potential risks.
Sam Altman's decisions and vision (00:02:24) Examining Sam Altman's role, decisions, and vision for AI technology and its impact on society.
Sam Altman's actions and accountability (00:04:14) Critique of Sam Altman's actions and accountability regarding the release of AI technology.
Reflections on getting fired (00:11:01) Sam Altman's reflections and emotions after getting fired from OpenAI's board.
Silencing of concerns (00:19:25) Discussion about the silencing of individuals concerned about AI safety, particularly Ilya Sutskever.
Relationship with Elon Musk (00:20:08) Sam Altman's sentiments and hopes regarding his relationship with Elon Musk amidst tension and legal matters.
Legal implications of AI technology (00:22:23) Debate on the fairness of training AI under copyright law and its legal implications.
The value of data (00:22:32) Sam Altman discusses the compensation for valuable data and its use.
Safety concerns (00:23:41) Discussion on the process for ensuring safety in AI technology.
Broad definition of safety (00:24:24) Exploring the various potential harms and impacts of AI, including technical, societal, and economic aspects.
Lack of trust and control (00:27:09) Sam Altman's admission about the power and control over AGI and the need for governance.
Public apathy towards AI risk (00:31:49) Addressing the common reasons for public inaction regarding AI risk awareness.
Celebration of life (00:34:20) A personal reflection on the beauty of music and family, with a message about the celebration of life.
Conclusion (00:38:25) Closing remarks and a preview of the next episode.
...

Episode #23 - “AI Acceleration Debate” For Humanity: An AI Safety Podcast

April 10, 2024 2:30 pm

FULL INTERVIEW STARTS AT (00:22:26)

e/acc: Suicide or Salvation? In episode #23, AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism, the existential risks and benefits of AI, questioning the AI safety movement and discussing the concept of AI as humanity's child. They talk about whether AI should align with human values and the potential consequences of alignment. Paul has some wild views, including that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the philosophy of accelerationism and the influence of human conditioning on our understanding of AI.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

TIMESTAMPS:

TRAILER (00:00:00)

INTRO: (00:05:40)

INTERVIEW:
Paul Leszczynski Interview (00:22:36) John Sherman interviews AI advocate Paul Leszczynski.
YouTube Channel Motivation (00:24:14) Leszczynski's reasons for his pro-acceleration channel.
AI Threat Viewpoint (00:28:24) Leszczynski on AI as an existential threat.
AI Impact Minority Opinion (00:32:23) Leszczynski's take on AI's minority view impact.
Tech Regulation Need (00:33:03) Regulatory oversight on tech startups debated.
Post-2008 Financial Regulation (00:34:16) Financial regulation effects and big company influence discussed.
Tech CEOs' Misleading Claims (00:36:31) Tech CEOs' public statement intentions.
Social Media Influence (00:38:09) Social media's advertising effectiveness.
AI Risk Speculation (00:41:32) Potential AI risks and regulatory impact.
AI Safety Movement Integrity (00:43:53) AI safety movement's motives challenged.
AI Alignment: Business or Moral? (00:47:27) AI alignment as business or moral issue.
AI Safety Advocacy Debate (00:48:15) Is AI safety advocacy moral or business?
AI Risk vs. Science (00:49:25) AI risk compared to religion and science.
AI Utopia Belief (00:51:21) AI accelerationism as belief or philosophy.
AI Doomsday Believer Types (00:53:27) Four types of AI doomsday believers.
AI Doomsday Belief Authenticity (00:54:22) Are AI doomsday believers genuine?
Geoffrey Hinton's AI Regret (00:57:24) Hinton's regret over AI work.
AI's Self-Perception (00:58:57) Will AI see itself as part of humanity?
AGI's Conditioning Debate (01:00:22) AGI's training vs. human-like start.
AGI's Human Qualities (01:01:52) AGI's ability to think and learn.
Text's Influence on AGI (01:02:13) Human text shaping AGI's values.
AGI's Curiosity Goal (01:05:32) AGI's curiosity and safety implications.
AGI Self-Inquiry Control (01:07:55) Should humans control AGI's learning?
AGI's Independent Decisions (01:11:33) Risks of AGI's autonomous actions.
AGI's View on Humans (01:15:47) AGI's potential post-singularity view of humans.
AI Safety Criticism (01:16:24) Critique of AI safety assumptions.
AI Engineers' Concerns (01:19:15) AI engineers' views on AI's dangers.
Techno Utopia Case (01:22:27) AI's role in a potential utopia.
Tech's Net Impact (01:26:11) Point where tech could harm society.
Accelerating AI Argument (01:29:28) Pros of fast AI development.
AGI's Training Impact (01:31:49) Effect of AGI's training data origin.
AI Development Cap (01:32:34) Theoretical limit of AI intelligence.
Intelligence Types (01:33:39) Intelligence beyond academics.
AGI's National Loyalty (01:40:16) AGI's allegiance to its creator nation.
Tech CEOs' Trustworthiness (01:44:13) Tech CEOs' trust in AI development.
Reflections on Discussion (01:47:12) Thoughts on the AI risk conversation.
Next Guest & Engagement (01:49:50) Introduction of next guest and call to action.

RESOURCES:

Paul’s Nutty YouTube Channel: Accel News Network
https://www.youtube.com/@DrDragon91

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI https://pauseai.info/
...

Episode #24 - “YOU can help save the world from AI Doom” For Humanity: An AI Safety Podcast

April 17, 2024 2:41 pm

TRAILER (00:00:00-00:05:20)
FULL INTERVIEW STARTS AT (00:08:05)

Episode #24 - “YOU can help save the world from AI Doom” For Humanity: An AI Safety Podcast

In episode #24, host John Sherman and Nonlinear co-founder Kat Woods discuss the critical need to prioritize AI safety in the face of developing superintelligent AI. Kat shares her personal transformation from skeptic to AI safety advocate. They explore the idea that AI could pose a near-term threat rather than just a long-term concern, the importance of prioritizing AI safety over other philanthropic endeavors, and the need for talented individuals to work on this issue. Kat highlights potential ways in which AI could harm humanity, such as creating super viruses or starting a nuclear war. They address common misconceptions, including the belief that AI will need humans or that it will be human-like.

Overall, the conversation emphasizes the urgency of addressing AI safety and the need for greater awareness and action. They also discuss the importance of funding AI safety research and the need for better regulation. The conversation ends on a hopeful note, with the speakers expressing optimism about the growing awareness and concern regarding AI safety.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

TIMESTAMPS:

AI Safety Urgency (00:00:00) Emphasizing the immediate need to focus on AI safety.
Superintelligent AI World (00:00:50) Considering the impact of AI smarter than humans.
AI Safety Charities (00:02:37) The necessity for more AI safety-focused charities.
Personal AI Safety Advocacy Journey (00:10:10) Kat Woods' transformation into an AI safety advocate.
AI Risk Work Encouragement (00:16:03) Urging skilled individuals to tackle AI risks.
AI Safety's Global Impact (00:17:06) AI safety's pivotal role in global challenges.
AI Safety Prioritization Struggles (00:18:02) The difficulty of making AI safety a priority.
Wealthy Individuals and AI Safety (00:19:55) Challenges for the wealthy in focusing on AI safety.
Superintelligent AI Threats (00:23:12) Potential global dangers posed by superintelligent AI.
Limits of Imagining Superintelligent AI (00:28:02) The struggle to fully grasp superintelligent AI's capabilities.
AI Containment Risks (00:32:19) The problem of effectively containing AI.
AI's Human-Like Risks (00:33:53) Risks of AI with human-like qualities.
AI Dangers (00:34:20) Potential ethical and safety risks of AI.
Nonlinear's Role in AI Safety (00:39:41) Nonlinear's contributions to AI safety work.
AI Safety Donations (00:41:53) Guidance on supporting AI safety financially.
Diverse AI Safety Recruitment (00:45:23) The need for varied expertise in AI safety.
AI Safety Rebranding (00:47:09) Proposing "AI risk" for clearer communication.
Effective Altruism and AI Safety (00:49:43) The relationship between effective altruism and AI safety.
AI Safety Complexity (00:52:12) The intricate nature of AI safety issues.
AGI Curiosity and Control (00:52:34) The balance of AGI's curiosity and human oversight.
AI Superintelligence Urgency (00:53:52) The critical timing and power of AI superintelligence.
AI Safety Work Perception (00:56:06) Changing the image of AI safety efforts.
AI Safety and Government Regulation (00:59:23) The potential for regulatory influence on AI safety.
Entertainment's AI Safety Role (01:04:24) How entertainment can promote AI safety awareness.
AI Safety Awareness Progress (01:05:37) Growing recognition and response to AI safety.
AI Safety Advocacy Funding (01:08:06) The importance of financial support for AI safety advocacy.
Effective Altruists and Rationalists Views (01:10:22) The stance of effective altruists and rationalists on AI safety.
AI Risk Marketing (01:11:46) The case for using marketing to highlight AI risks.

RESOURCES:

Nonlinear: https://www.nonlinear.org/

Best Account on Twitter: AI Notkilleveryoneism Memes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.gg/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
...

Episode #25 - “Does The AI Safety Movement Have It All Wrong?” For Humanity: An AI Safety Podcast

April 24, 2024 1:06 pm

FULL INTERVIEW STARTS AT (00:08:20)

Episode #25 - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast

DONATE HERE TO HELP PROMOTE THIS SHOW
https://www.paypal.com/paypalme/forhumanitypodcast

In episode #25, host John Sherman and Dr. Émile Torres explore the concept of humanity's future and the rise of artificial general intelligence (AGI) and machine superintelligence. This is really thought-provoking stuff, as are the supporting links under RESOURCES.

Dr. Torres lays out their view that the AI safety movement has it all wrong on existential threat. Concerns are voiced about the potential risks of advanced AI, questioning the effectiveness of AI safety research and the true intentions of companies like OpenAI. Dr. Torres supports a full "stop AI" movement, doubting the benefits of pursuing such powerful AI technologies and highlighting the potential for catastrophic outcomes if AI systems become misaligned with human values. The discussion also touches on the urgency of solving AI control problems to avoid human extinction.

Émile P. Torres is a philosopher whose research focuses on existential threats to civilization and humanity. They have published widely in the popular press and scholarly journals, with articles appearing in the Washington Post, Aeon, Bulletin of the Atomic Scientists, Metaphilosophy, Inquiry, Erkenntnis, and Futures.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

TIMESTAMPS:

**The definition of human extinction and AI Safety Podcast Introduction (00:00:00)**.

**Paul Christiano's perspective on AI risks and debate on AI safety (00:03:51)**

**Interview with Dr. Émile Torres on transhumanism, AI safety, and historical perspectives (00:08:17)**

**Challenges to AI safety concerns and the speculative nature of AI arguments (00:29:13)**

**AI's potential catastrophic risks and comparison with climate change (00:47:49)**

**Defining intelligence, AGI, and unintended consequences of AI (00:56:13)**

**Catastrophic Risks of Advanced AI and perspectives on AI Safety (01:06:34)**

**Inconsistencies in AI Predictions and the Threats of Advanced AI (01:15:19)**

**Curiosity in AGI and the ethical implications of building superintelligent systems (01:22:49)**

**Challenges of discussing AI safety and effective tools to convince the public (01:27:26)**

**Tangible harms of AI and hopeful perspectives on the future (01:37:00)**

**Parental instincts and the need for self-sacrifice in AI risk action (01:43:53)**

RESOURCES:

THE TWO MAIN PAPERS ÉMILE LOOKS TO IN MAKING THEIR CASE:

Against the singularity hypothesis, by David Thorstad:
https://philpapers.org/archive/THOATS-5.pdf

Challenges to the Omohundro–Bostrom framework for AI motivations, by Olle Häggström: https://www.math.chalmers.se/~olleh/ChallengesOBframeworkDeanonymized.pdf

Paul Christiano on Bankless
How We Prevent the AI’s from Killing us with Paul Christiano

Emile Torres TruthDig Articles:
https://www.truthdig.com/author/emile-p-torres/

Dr. Torres Book: Human Extinction (Routledge Studies in the History of Science, Technology and Medicine) 1st Edition
https://www.amazon.com/Human-Extinction-Annihilation-Routledge-Technology/dp/1032159065

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
...

Episode #26 - “Pause AI Or We All Die” Holly Elmore Interview, For Humanity: An AI Safety Podcast

May 1, 2024 3:23 pm

Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast

FULL INTERVIEW STARTS AT (00:09:55)

In episode #26, host John Sherman and Pause AI US Founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine.

This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI safety podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and what you can do to help save humanity.

TIMESTAMPS:
**Progress in Artificial Intelligence (00:00:00)**
Discussion about the rapid progress in AI, its impact on AI safety, and revisiting assumptions.

**Introduction to AI Safety Podcast (00:00:49)**
Introduction to the "For Humanity: An AI Safety" podcast, its focus on the threat of human extinction from AI, and revising AI risk percentages.

**Need for Compute Cap Regulations (00:04:16)**
Discussion about the need for laws to cap compute power used by big AI companies, ethical implications, and the appointment of Paul Christiano to a new AI safety governmental agency.

**Personal Journey into AI Risk Awareness (00:15:26)**
Holly Elmore's personal journey into AI risk awareness, understanding AI risk, humility, and the importance of recognizing the potential impact of events no one has yet experienced.

**The Overton Window Shift and Imagination Limitation (00:22:05)**
Discussion on societal reactions to dramatic changes and the challenges of imagining the potential impact of artificial intelligence.

**OpenAI's Approach to AI Safety (00:25:53)**
Discussion on OpenAI's strategy for creating AI, the mindset at OpenAI, and the internal dynamics within the AI safety community.

**The History and Evolution of AI Safety Community (00:41:37)**
Discusses the origins and changes in the AI safety community, engaging the public, and ethical considerations in AI safety decision-making.

**Impact of Technology on Social Change (00:51:47)**
Explores differing perspectives on the role of technology in driving social change, perception of technology, and progress.

**Challenges and Opportunities in AI Adoption (01:02:42)**
Explores the possibility of a third way in AI adoption, the effectiveness of protests, and concerns about AI safety.

Resources:

Azeem Azhar + Connor Leahy Podcast
Debating the existential risk of AI, with Connor Leahy

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 3pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
...

Episode #27 - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast

May 8, 2024 2:13 pm

FULL INTERVIEW STARTS AT (00:09:57)

Please Donate Here To Help Promote This Show
https://www.paypal.com/paypalme/forhumanitypodcast

In episode #27, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists.

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH
https://pauseai.info/2024-may

TIMESTAMPS:

The protest at OpenAI (00:00:00) Discussion on the non-violent protest at the OpenAI headquarters and the response from the employees.

Introduction to the AI Safety Podcast (00:01:58) The host introduces the podcast and its focus on the threat of human extinction from artificial intelligence.

Embracing safe AI and opposing frontier AI (00:03:31) Discussion on the principles of embracing safe AI while opposing dangerous frontier AI development.

The Road Trip to Protest (00:09:31) Description of the road trip to San Francisco for a protest at OpenAI, including a video of the protest and interactions with employees.

Formation of the World Pause Coalition (00:15:07) Introduction to the World Pause Coalition and its mission to raise awareness about AI and superintelligence.

Engaging the Public (00:16:46) Discussion on the public's response to flyers and interactions at the protest, highlighting the lack of knowledge about AI and superintelligence.

The smaller countries' stakes (00:22:53) Highlighting the importance of smaller countries' involvement in AI safety negotiations and protests.

San Francisco protest (00:25:29) Discussion about the experience and impact of the protest at the OpenAI headquarters in San Francisco, including interactions with OpenAI workers.

Importance of sustained protesting (00:35:42) Emphasizing the significance of continued and relentless protesting efforts in raising awareness and effecting change.

Creating relatable protests (00:37:19) Discussion on the importance of making protests relatable to everyday people and the potential impact of multi-generational and multicultural coalition efforts.

Different approaches to protesting (00:41:33) Exploration of peaceful protesting as the preferred approach, contrasting with more extreme methods used by other groups.

Emotional reach in AI safety advocacy (00:42:35) Emphasizing the need to emotionally connect with people to raise awareness about the dangers of AGI and the challenges of changing people's positions based on information alone.

Benefits of AI (00:45:15) Exploration of the benefits of AI in medical advancements and climate solutions, emphasizing the need to focus on these areas.

Suffering Risk (00:48:24) Exploring the concept of suffering risk associated with superintelligence and the potential dangers of AGI.

Religious Leaders' Role (00:52:39) Discussion on the potential role of religious leaders in raising awareness and mobilizing support for AI safety.

Future Protests and Media Strategy (00:59:04) Planning future protests, media outreach, and the need for a strong marketing mindset.

Personal Impact of AI Concerns (01:03:52) Reflection on the personal weight of understanding AI risks and maintaining hope for a positive outcome.

Finding Catharsis in Taking Action (01:08:12) How taking action to help feels cathartic and alleviates the weight of the issue.

Weighing the Impact on Future Generations (01:09:18) The heavy burden of concern for future generations and the motivation to act for their benefit.

Encouraging Participation in Protest (01:19:14) Encourages listeners to participate in a protest and emphasizes the importance of individual action.

RESOURCES:

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk
...

Episode #28 - “AI Safety = Emergency Preparedness” For Humanity: An AI Safety Podcast

May 15, 2024 3:09 pm

Full Interview Starts At: (00:09:54)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

BIG IDEA ALERT: This week’s show has something really big and really new. What if AI Safety didn’t have to carve out a new space in government? What if it could fit into already existing budgets? Emergency Preparedness, in the post-9/11 era, is a massively well-funded area of federal and state government here in the US. There are agencies, organizations, and big budgets already created to fund the prevention of and recovery from disasters of all kinds: asteroids, pandemics, climate-related, terrorist-related, the list goes on and on.

This week’s guest, AI Policy Researcher Akash Wasil, has had more than 80 meetings with congressional staffers about AI existential risk. In Episode 28, he goes over his framing of AI Safety as Emergency Preparedness, the US vs. China race dynamic, and the vibes on Capitol Hill about AI risk. What does Congress think of AI risk?

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

JOIN THE PAUSE AI PROTEST MONDAY MAY 13TH
https://pauseai.info/2024-may

TIMESTAMPS:

**Emergency Preparedness in AI (00:00:00)**
**Introduction to the Podcast (00:02:49)**
**Discussion on AI Risk and Disinformation (00:06:27)**
**Engagement with Lawmakers and Policy Development (00:09:54)**
**Control AI's Role in AI Risk Awareness (00:19:00)**
**Engaging with congressional offices (00:25:00)**
**Establishing AI emergency preparedness office (00:32:35)**
**Congressional focus on AI competitiveness (00:37:55)**
**Expert opinions on AI risks (00:40:38)**
**Commerce vs. national security (00:42:41)**
**US AI Safety Institute's placement (00:46:33)**
**Expert concerns and raising awareness (00:50:34)**
**Influence of protests on policy (00:57:00)**
**Public opinion on AI regulation (01:02:00)**
**Silicon Valley Culture vs. DC Culture (01:05:44)**
**International Cooperation and Red Lines (01:12:34)**
**Eliminating Race Dynamics in AI Development (01:19:56)**
**Government Involvement for AI Development (01:22:16)**
**Compute-Based Licensing Proposal (01:24:18)**
**AI Safety as Emergency Preparedness (01:27:43)**
**Closing Remarks (01:29:09)**

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

Episode #30 - “Dangerous Days At Open AI” For Humanity: An AI Risk Podcast

May 29, 2024 3:50 pm

Full Interview starts at (00:05:01)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

In episode 30, John Sherman interviews Professor Olle Häggström on a wide range of AI risk topics. At the top of the list is the super-instability and the super-exodus from OpenAI’s superalignment team following the resignations of Jan Leike and Ilya Sutskever.

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps
1. **(00:00:00)** AI Safety Competence and Alignment Team Necessity at OpenAI
2. **(00:02:56)** New Podcast Title and Music Unveiled
3. **(00:03:56)** Host Acknowledges Positive Feedback from Professor Häggström
4. **(00:05:05)** Host Excited by Professor Häggström's Show Familiarity
5. **(00:06:44)** Key Departures and Developments at OpenAI
6. **(00:12:54)** Public Awareness Trajectory of AI Safety
7. **(00:14:17)** Super Alignment and AI Safety Research Funding
8. **(00:26:27)** Sam Altman's Character Post-Scarlett Johansson Incident
9. **(00:28:11)** Recalling Sam Altman's Candor on AI Risks
10. **(00:28:50)** The Corrupting Influence of Commercial Incentives on Individuals
11. **(00:30:13)** The Need for Oversight to Prevent Corruption
12. **(00:31:03)** Government Intervention in AI Development Urged
13. **(00:32:54)** US Leadership in AI Safety and Regulation
14. **(00:35:58)** Evaluating the Impact of AI Regulation
15. **(00:36:19)** Concerns Over Political Understanding of AI Risks



RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

Episode #31 - “Trucker vs. AGI” For Humanity: An AI Risk Podcast

June 5, 2024 3:16 pm

In Episode #31, John Sherman interviews Leighton, a 29-year-old American truck driver, about his concerns over human extinction and artificial intelligence. They discuss the urgency of raising awareness about AI risks, the potential job displacement in industries like trucking, and the geopolitical implications of AI advancements. Leighton shares his plans to start a podcast and possibly use filmmaking to engage the public in AI safety discussions. Despite skepticism from others, they stress the importance of community and dialogue in understanding and mitigating AI threats, with Leighton highlighting the risk of a "singleton event" and ethical concerns in AI development.

Full Interview Starts at (00:10:18)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

Timestamps

- Leighton's Introduction (00:00:00)
- Introduction to the Podcast (00:02:19)
- Power of the First Followers (00:03:24)
- Leighton's Concerns about AI (00:08:49)
- Leighton's Background and AI Awareness (00:11:11)
- Challenges in Spreading Awareness (00:14:18)
- Distrust of Government and Family Involvement (00:23:20)
- Government Imperfections (00:25:39)
- AI Impact on National Security (00:26:45)
- AGI Decision-Making (00:28:14)
- Government Oversight of AGI (00:29:32)
- Geopolitical Tension and AI (00:31:51)
- Job Loss and AGI (00:37:20)
- AI, Mining, and Space Race (00:38:02)
- Public Engagement and AI (00:44:34)
- Philosophical Perspective on AI (00:49:45)
- The existential threat of AI (00:51:05)
- Geopolitical tensions and AI risks (00:52:05)
- AI's potential for global dominance (00:53:48)
- Ethical concerns and AI welfare (01:01:21)
- Preparing for AI risks (01:03:02)
- The challenge of raising awareness (01:06:42)
- A hopeful outlook (01:08:28)

RESOURCES:

Leighton’s Podcast on YouTube:

https://www.youtube.com/@UrNotEvenBasedBro

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

Episode #32 - “Humans+AGI=Harmony?” For Humanity: An AI Risk Podcast

June 12, 2024 9:39 am

Could humans and AGIs live in a state of mutual symbiosis, like the ecosystem of a coral reef?

(FULL INTERVIEW STARTS AT 00:23:21)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

In episode 32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk related projects. He believes it’s possible humans and AGIs can co-exist in mutual symbiosis.

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.


RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

NYT: OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html?unlocked_article_code=1.xE0._mTr.aNO4f_hEp2J4&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb

Dwarkesh Patel Interviews Another Whistleblower
Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History

Roman Yampolskiy on Lex Fridman
Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

Gladstone AI on Joe Rogan
Joe Rogan Experience #2156 - Jeremie & Edouard Harris

Peter Jensen’s Videos:
HOW can AI Kill-us-All? So Simple, Even a Child can Understand (1:25)

WHY do we want AI? For our Humanity (1:00)

WHAT is the BIG Problem? Wanted: SafeAI Forever (3:00)

FIRST do no harm. (Safe AI Blog)

DECK. On For Humanity Podcast “Just the FACTS, please. WHY? WHAT? HOW?” (flip book)

https://discover.safeaiforever.com/

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:

**The release of products that are safe (00:00:00)**

**Breakthroughs in AI research (00:00:41)**

**OpenAI whistleblower concerns (00:01:17)**

**Roman Yampolskiy's appearance on Lex Fridman podcast (00:02:27)**

**The capabilities and risks of AI systems (00:03:35)**

**Interview with Gladstone AI founders on Joe Rogan podcast (00:08:29)**

**OpenAI whistleblower's interview on Hard Fork podcast (00:14:08)**

**Peter Jensen's work on AI risk and media communication (00:20:01)**

**The interview with Peter Jensen (00:22:49)**

**Mutualistic Symbiosis and AI Containment (00:31:30)**

**The Probability of Catastrophic Outcome from AI (00:33:48)**

**The AI Safety Institute and Regulatory Efforts (00:42:18)**

**Regulatory Compliance and the Need for Safety (00:47:12)**

**The hard compute cap and hardware adjustment (00:47:47)**

**Physical containment and regulatory oversight (00:48:29)**

**Viewing the issue as a big business regulatory issue vs. a national security issue (00:50:18)**

**Funding and science for AI safety (00:49:59)**

**OpenAI's power allocation and ethical concerns (00:51:44)**

**Concerns about AI's impact on employment and societal well-being (00:53:12)**

**Parental instinct and the urgency of AI safety (00:56:32)**
...

AI Parenting Tools | Episode #33

June 19, 2024 1:02 pm

Thank you for tuning in to our video discussing AI for dads and the impact of AI on fatherhood.

In this video, we delve into the question: What is the impact of AI on fatherhood? Join us as we guide you through this important topic.

- Parenting with AI
- Best parenting books for dads
- Free online parenting classes for dads

*************************

In Episode #33, host John Sherman talks with Dustin Burnham, who is a dad, an anesthetist, an AI risk realist, and a podcast host himself. His show is about being a dad. (https://www.youtube.com/@thepresentfathers)

(Full Interview starts at 00:11:31)

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

*************************

RESOURCES:

Check out Dustin Burnham’s fatherhood podcast: https://www.youtube.com/@thepresentfathers

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

BUY STEPHEN HANSON’S BEAUTIFUL BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

*************************

TIMESTAMPS

**The threat of AI to humanity (00:00:22)**

**Pope Francis's address at the G7 summit on AI risk (00:02:31)**

**Starting a dialogue on tough subjects (00:05:44)**

**The challenges and joys of fatherhood (00:10:47)**

**Concerns and excitement about AI technology (00:15:09)**

**The Present Fathers Podcast (00:16:58)**

**Personal experiences of fatherhood (00:18:56)**

**The impact of AI risk on future generations (00:21:11)**

**Elon Musk's Concerns (00:21:57)**

**Impact of Denial (00:23:40)**

**Potential AI Risks (00:24:27)**

**Psychopathy and Decision-Making (00:26:28)**

**Personal and Societal Impact (00:28:46)**

**AI Risk Awareness (00:30:12)**

**Ethical Considerations (00:31:46)**

**AI Technology and Human Impact (00:34:28)**

**Exponential Growth and Risk (00:36:06)**

**Emotion and Empathy in AI (00:37:58)**

**Antinatalism and Ethical Debate (00:41:04)**

**The antinatalist ideas (00:42:20)**

**Psychopathic tendencies among CEOs and decision making (00:43:27)**

**The power of social media in influencing change (00:46:12)**

**The unprecedented threat of human extinction from AI (00:49:03)**

**Teaching large language models to love humanity (00:50:11)**

**Proposed measures for AI regulation (00:59:27)**

**China's approach to AI safety regulations (01:01:12)**

**The threat of open sourcing AI (01:02:50)**

**Protecting children from AI temptations (01:04:26)**

**Challenges of policing AI-generated content (01:07:06)**

**Hope for the future and engaging in AI safety (01:10:33)**

**Performance by YG Marley and Lauryn Hill (01:14:26)**

**Final thoughts and call to action (01:22:28)**

*************************

Discover more of our video content on the impact of AI on fatherhood. You'll find additional insights on this topic along with relevant social media links.

YouTube: https://www.youtube.com/@ForHumanityPodcast
Website: http://www.storyfarm.com/

***************************

This video explores the impact of AI on fatherhood, parenting with AI, best parenting books for dads, and free online parenting classes for dads.

Have we addressed your curiosity regarding the impact of AI on fatherhood?

We eagerly await your feedback and insights. Please drop a comment below, sharing your thoughts, queries, or suggestions about: parenting with AI, best parenting books for dads, free online parenting classes for dads, and the impact of AI on fatherhood.
...

Episode #34 - “The Threat of AI Autonomous Replication” For Humanity: An AI Risk Podcast

June 26, 2024 1:44 pm

In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director, Centre pour la sécurité de l'IA. Among the very important topics covered: autonomous AI self-replication, the potential for warning shots to go unnoticed due to a public and a journalist class that are uneducated on AI risk, and the potential for a disastrous Yann LeCun-ification of the upcoming February 2025 Paris AI Safety Summit.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:

Charbel-Raphaël Segerie’s Less Wrong Writing, much more on many topics we covered!
https://www.lesswrong.com/users/charbel-raphael

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:
**The threat of AI autonomous replication (00:00:43)**

**Introduction to France's Center for AI Security (00:01:23)**

**Challenges in AI risk awareness in France (00:09:36)**

**The influence of Yann LeCun on AI risk perception in France (00:12:53)**

**Autonomous replication and adaptation of AI (00:15:25)**

**The potential impact of autonomous replication (00:27:24)**

**The dead internet scenario (00:27:38)**

**The potential existential threat (00:29:02)**

**Fast takeoff scenario (00:30:54)**

**Dangers of autonomous replication and adaptation (00:34:39)**

**Difficulty in recognizing warning shots (00:40:00)**

**Defining red lines for AI development (00:42:44)**

**Effective education strategies (00:46:36)**

**Impact on computer science students (00:51:27)**

**AI safety summit in Paris (00:53:53)**

**The summit and AI safety report (00:55:02)**

**Potential impact of key figures (00:56:24)**

**Political influence on AI risk (00:57:32)**

**Accelerationism in political context (01:00:37)**

**Optimism and hope for the future (01:04:25)**

**Chances of a meaningful pause (01:08:43)**
...

Episode #35 “The AI Risk Investigators: Inside Gladstone AI Part 1” For Humanity:An AI Risk Podcast

July 3, 2024 2:28 pm

In Episode #35, host John Sherman talks with Jeremie and Edouard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government, in any form, of the reality of AI risk. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and will be broken into two shows.

Gladstone AI Action Plan
https://www.gladstone.ai/action-plan

TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:

Sincerity and Sam Altman (00:00:00) Discussion on the perceived sincerity of Sam Altman and his actions, including insights into his character and motivations.

Introduction to Gladstone AI (00:01:14) Introduction to Gladstone AI, its involvement with the US government on AI risk, and the purpose of the podcast episode.

Doom Debates on YouTube (00:02:17)

YC Experience and Sincerity in Startups (00:08:13) Insight into the Y Combinator (YC) experience and the emphasis on sincerity in startups, with personal experiences and observations shared.

OpenAI and Sincerity (00:11:51) Exploration of sincerity in relation to OpenAI, including evaluations of the company's mission, actions, and the challenges it faces in the AI landscape.

The scaling story (00:21:33) Discussion of the scaling story related to AI capabilities and the impact of increasing data, processing power, and training models.

The call about GPT-3 (00:22:29) Edouard Harris receiving a call about the scaling story and the significance of GPT-3's capabilities, leading to a decision to focus on AI development.

Transition from Y Combinator (00:24:42) Jeremie and Edouard Harris leaving their previous company and transitioning from Y Combinator to focus on AI development.

Security concerns and exfiltration (00:31:35) Discussion about the security vulnerabilities and potential exfiltration of AI models from top labs, highlighting the inadequacy of security measures.

Government intervention and security (00:38:18) Exploration of the potential for government involvement in providing security assets

Resource reallocation for safety and security (00:40:03) Discussion about the need to reallocate resources for safety, security, and alignment technology

OpenAI's computational resource allocation (00:42:10) Concerns about OpenAI's failure to allocate computational resources for safety

China's Strategic Moves (00:43:07) Discussion on potential aggressive actions by China to prevent a permanent disadvantage in AI technology.

China's Sincerity in AI Safety (00:44:29) Debate on the sincerity of China's commitment to AI safety and the influence of the CCP.

Taiwan Semiconductor Manufacturing Company (TSMC) (00:47:47)

US and China's Power Constraints (00:51:30) Comparison of the constraints faced by the US and China in terms of advanced chips and grid power.

Nuclear Power and Renewable Energy (00:52:23) Discussion on the power sources being pursued by China

Future Scenarios (00:56:20) Exploration of potential outcomes if China overtakes the US in AI technology.
...

Episode #36 “The AI Risk Investigators: Inside Gladstone AI, Part 2” For Humanity Podcast

July 10, 2024 4:49 pm

In Episode #36, host John Sherman talks with Jeremie and Edouard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment by the US government, in any form, of the reality of AI risk. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and is broken into two shows; this is the second of the two.

Gladstone AI Action Plan
https://www.gladstone.ai/action-plan

TIME MAGAZINE ON THE GLADSTONE REPORT
https://time.com/6898967/ai-extinction-national-security-risks-report/

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

RESOURCES:

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

TIMESTAMPS:

**The whistleblower's concerns (00:00:00)**

**Introduction to the podcast (00:01:09)**

**The urgency of addressing AI risk (00:02:18)**

**The potential consequences of falling behind in AI (00:04:36)**

**Transitioning to working on AI risk (00:06:33)**

**Engagement with the State Department (00:08:07)**

**Project assessment and public visibility (00:10:10)**

**Motivation for taking on the detective work (00:13:16)**

**Alignment with the government's safety culture (00:17:03)**

**Potential government oversight of AI labs (00:20:50)**

**The whistleblowers' concerns (00:21:52)**

**Shifting control to the government (00:22:47)**

**Elite group within the government (00:24:12)**

**Government competence and allocation of resources (00:25:34)**

**Political level and tech expertise (00:27:58)**

**Challenges in government engagement (00:29:41)**

**State department's engagement and assessment (00:31:33)**

**Recognition of government competence (00:34:36)**

**Engagement with frontier labs (00:35:04)**

**Whistleblower insights and concerns (00:37:33)**

**Whistleblower motivations (00:41:58)**

**Engagements with AI Labs (00:42:54)**

**Emotional Impact of the Work (00:43:49)**

**Workshop with Government Officials (00:44:46)**

**Challenges in Policy Implementation (00:45:46)**

**Expertise and Insights (00:49:11)**

**Future Engagement with US Government (00:50:51)**

**Flexibility of Private Sector Entity (00:52:57)**

**Impact on Whistleblowing Culture (00:55:23)**

**Key Recommendations (00:57:03)**

**Security and Governance of AI Technology (01:00:11)**

**Obstacles and Timing in Hardware Development (01:04:26)**

**The AI Lab Security Measures (01:04:50)**

**Nvidia's Stance on Regulations (01:05:44)**

**Export Controls and Governance Failures (01:07:26)**

**Concerns about AGI and Alignment (01:13:16)**

**Implications for Future Generations (01:16:33)**

**Personal Transformation and Mental Health (01:19:23)**

**Starting a Nonprofit for AI Risk Awareness (01:21:51)**
...

Episode #39 “Did AI-Risk Just Get Partisan?” For Humanity: An AI Risk Podcast

July 31, 2024 1:12 pm

In Episode #39, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation starts out with the various state AI laws that are coming up and moves into the shifting political landscape around AI-risk legislation in America in July 2024.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Timestamps
**Introduction to Political Developments (00:00:00)**

**GOP's AI Regulation Stance (00:00:41)**

**Welcome to Episode 39 (00:01:41)**

**Trump's Assassination Attempt (00:03:41)**

**Partisan Shift in AI Risk (00:04:09)**

**Matthew Tabor's Background (00:06:32)**

**Tennessee's "ELVIS" Law (00:13:55)**

**Bipartisan Support for ELVIS (00:15:49)**

**California's Legislative Actions (00:18:58)**

**Overview of California Bills (00:20:50)**

**Lobbying Influence in California (00:23:15)**

**Challenges of AI Training Data (00:24:26)**

**The Original Sin of AI (00:25:19)**

**The Need for Guardrails (00:26:33)**

**Congress and AI Regulation (00:27:29)**

**Investigations into AI Companies (00:28:48)**

**The New York Times Lawsuit (00:29:39)**

**Political Developments in AI Risk (00:30:24)**

**GOP Platform and AI Regulation (00:31:35)**

**Local vs. National AI Regulation (00:32:58)**

**Public Awareness of AI Regulation (00:33:38)**

**Mobilizing Public Support (00:34:24)**

**The Role of Moneyed Interests (00:35:00)**

**California AI Bills and Industry Reaction (00:36:09)**

**Elon Musk's Claims on California Tech (00:38:03)**

**Legislation and Guardrails (00:39:00)**

**Understanding of AI Risk Among Lawmakers (00:40:00)**

**Engaging with Lawmakers (00:41:05)**

**Roleplay Demonstration (00:43:48)**

**Legislative Frameworks for AI (00:46:20)**

**Coalition Against AI Development (00:49:28)**

**Understanding AI Risks in Hollywood (00:51:00)**

**Generative AI in Film Production (00:53:32)**

**Legislative Awareness of AI Issues (00:55:08)**

**Impact of AI on Authenticity in Entertainment (00:56:14)**

**The Future of AI-Generated Content (00:57:31)**

**AI Legislation and Political Dynamics (01:00:43)**

**Partisan Issues in AI Regulation (01:02:22)**

**Influence of Celebrity Advocacy on AI Legislation (01:04:11)**

**Understanding Legislative Processes for AI Bills (01:09:23)**

**Presidential Approach to AI Regulation (01:11:47)**

**State-Level Initiatives for AI Legislation (01:14:09)**

**State vs. Congressional Regulation (01:15:05)**

**Engaging Lawmakers (01:15:29)**

**YouTube Video Views Explanation (01:15:37)**

**Algorithm Challenges (01:16:48)**

**Celebration of Life (01:18:08)**

**Final Thoughts and Call to Action (01:19:13)**
...

Episode #40 “Surviving Doom” For Humanity: An AI Risk Podcast

August 7, 2024 4:08 pm

In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and a longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world after a warning shot or other major AI-related disaster, and he’s helping others do the same. James shares his insight, his long-standing awareness of the problem, and his expertise in helping others find a way to survive and rebuild after a post-AGI warning-shot disaster.

FULL INTERVIEW STARTS AT **00:04:47**

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

Timestamps
### **Relevance to AGI (00:05:05)**
### **Nuclear Threats and Survival (00:05:34)**
### **Introduction to the Podcast (00:06:18)**
### **Open Source AI Discussion (00:09:28)**
### **James's Background and Location (00:11:00)**
### **Prepping and Quality of Life (00:13:12)**
### **Creating Spaces for Preparation (00:13:48)**
### **Survival Odds and Nuclear Risks (00:21:12)**
### **Long-Term Considerations (00:22:59)**
### **The Warning Shot Discussion (00:24:21)**
### **The Need for Preparation (00:27:38)**
### **Planning for Population Centers (00:28:46)**
### **Likelihood of Extinction (00:29:24)**
### **Basic Preparedness Steps (00:30:04)**
### **Natural Disaster Preparedness (00:32:15)**
### **Timeline for Change (00:32:58)**
### **Predictions for AI Breakthroughs (00:34:08)**
### **Human Nature and Future Risks (00:37:06)**
### **Societal Influences on Behavior (00:40:00)**
### **Living Off-Grid (00:43:04)**
### **Conformity Bias in Humanity (00:46:38)**
### **Planting Seeds of Change (00:48:01)**
### **The Evolution of Human Reasoning (00:48:22)**
### **Looking Back to 1998 (00:48:52)**
### **Emergency Preparedness Work (00:52:19)**
### **The Shift to Effective Altruism (00:53:22)**
### **The AI Safety Movement (00:54:24)**
### **The Challenge of Public Awareness (00:55:40)**
### **The Historical Context of AI Discussions (00:57:01)**
### **The Role of Effective Altruism (00:58:11)**
### **Barriers to Knowledge Spread (00:59:22)**
### **The Future of AI Risk Advocacy (01:01:17)**
### **Shifts in Mindset Over 26 Years (01:03:27)**
### **The Impact of Youthful Optimism (01:04:37)**
### **Disillusionment with Altruism (01:05:37)**
### **Short Timelines and Urgency (01:07:48)**
### **Human Nature and AI Development (01:08:49)**
### **The Risks of AI Leadership (01:09:16)**
### **Public Reaction to AI Risks (01:10:22)**
### **Consequences for AI Researchers (01:11:18)**
### **Contradictions of Abundance (01:11:42)**
### **Personal Safety in a Risky World (01:12:40)**
### **Assassination Risks for Powerful Figures (01:13:41)**
### **Future Governance Challenges (01:14:44)**
### **Distribution of AI Benefits (01:16:12)**
### **Ethics and AI Development (01:18:11)**
### **Moral Obligations to Non-Humans (01:19:02)**
### **Utopian Futures and AI (01:21:16)**
### **Varied Human Values (01:22:29)**
### **International Cooperation on AI (01:27:57)**
### **Hope Amidst Uncertainty (01:31:14)**
### **Resilience in Crisis (01:31:32)**
### **Building Safe Zones (01:32:18)**
### **Urgency for Action (01:33:06)**
### **Doomsday Prepping Reflections (01:33:56)**
### **Celebration of Life (01:35:07)**
...

Episode #41 “David Brooks: Dead Wrong on AI” For Humanity: An AI Risk Podcast

August 14, 2024 1:26 pm

In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled “Many People Fear AI: They Shouldn’t”, and in full candor it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks’s 7/31/24 piece in the New York Times.

EMAIL THIS SHOW TO DAVID BROOKS: [email protected]

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...

How Is AI Impacting The Film Industry? | Actors VS. AI Episode #42

August 21, 2024 3:14 pm

Thank you for tuning in to our video discussing actors vs. AI and how AI is impacting the film industry.

In this video, we delve into the question: How is AI impacting the film industry? Join us as we guide you through this important topic.

- SAG-AFTRA
- What is the SAG-AFTRA contract with AI?
- Impact of AI on Hollywood

*************************

In Episode #42, host John Sherman talks with actor Erik Passoja about AI’s impact on Hollywood, the fight to protect people’s digital identities, and the vibes in LA about existential risk.

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom


22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes

*************************

Discover more of our video content on how AI is impacting the film industry. You'll find additional insights on this topic along with relevant social media links.

YouTube: https://www.youtube.com/@ForHumanityPodcast
Website: http://www.storyfarm.com/

***************************

This video explores how AI is impacting the film industry, SAG-AFTRA, what is the SAG-AFTRA contract with AI, and the impact of AI on Hollywood.

Have we addressed your curiosity regarding how AI is impacting the film industry?

We eagerly await your feedback and insights. Please drop a comment below, sharing your thoughts, queries, or suggestions about: SAG-AFTRA, what is the SAG-AFTRA contract with AI, the impact of AI on Hollywood, and how AI is impacting the film industry.
...

Episode #43: “So what exactly is the good case for AI?” For Humanity: An AI Risk Podcast

August 28, 2024 6:35 pm

In Episode #43, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case that the big AI companies and accelerationists make for a good AI future.

LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE
https://pauseai.info/local-organizing

Please Donate Here To Help Promote For Humanity
https://www.paypal.com/paypalme/forhumanitypodcast

EMAIL JOHN: [email protected]

This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.

For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI.

Peabody Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, in as little as 2 years. This podcast is solely about the threat of human extinction from AGI. We’ll meet the heroes and villains, explore the issues and ideas, and learn what you can do to help save humanity.

RESOURCES:

JOIN THE FIGHT, help Pause AI!!!!
Pause AI

Join the Pause AI Weekly Discord Thursdays at 2pm EST
https://discord.com/invite/pVMWjddaW7

Max Winga’s “A Stark Warning About Extinction”
https://youtu.be/kDcPW5WtD58?si=i6IRy82xZ2PUOp22

For Humanity Theme Music by Josef Ebner
Youtube: https://www.youtube.com/channel/UCveruX8E-Il5A9VMC-N4vlg
Website: https://josef.pictures

SUBSCRIBE TO LIRON SHAPIRA’S DOOM DEBATES on YOUTUBE!!
https://www.youtube.com/@DoomDebates

BUY STEPHEN HANSON’S BEAUTIFUL AI RISK BOOK!!!
https://stephenhansonart.bigcartel.com/product/the-entity-i-couldn-t-fathom


22 Word Statement from Center for AI Safety
Statement on AI Risk | CAIS
https://www.safe.ai/work/statement-on-ai-risk

Best Account on Twitter: AI Notkilleveryoneism Memes
https://twitter.com/AISafetyMemes
...
