OpenAI

OpenAI is the company behind ChatGPT and GPT-4, leading the frontier AI race and arguably one of the most likely contenders, if not the most likely, to bring AGI to the world.

OpenAI was founded as a non-profit in December 2015 as a counterweight to Google DeepMind (at that point the only serious player on the frontier-AI front), but it quickly evolved into a closed, for-profit money-making machine, racing to become the most capital-heavy corporation on the planet (there has been talk of raising as much as $7 trillion and of building its own nuclear plants).
Elon Musk, who co-founded the original open, non-profit version, famously said of the shift: “This would be like, let’s say you funded an organization to save the Amazon rainforest, and instead they became a lumber company, and chopped down the forest and sold it for money.”

The company’s security setup has frequently been likened to Swiss cheese: it has repeatedly been hacked by foreign agents, with the general public informed only months later. Several employees have claimed that, by now, countries like China and Russia hold more information and data about the frontier work done there than the US government itself.

Most alarmingly, the culture has shifted away from the safety mindset of the early days, when half the company worked with existential risk at the top of the agenda, towards a short-term, profit-driven focus. The people who care are either gone or too scared to speak publicly, forced to go through whistleblower channels to voice their concerns.

The selective pressure towards the wrong priorities has been severe: there have been multiple waves of resignations among the best alignment and safety scientists. The world’s top scientific talent working on the alignment problem has left the company because researchers were not given the resources they needed, the environment had turned toxic, and race dynamics were pushing everything towards reckless acceleration.

What The Ex-OpenAI Safety Employees Are Worried About

July 3, 2024 4:42 pm

William Saunders is an ex-OpenAI Superalignment team member. Lawrence Lessig is a professor of Law and Leadership at Harvard Law School. The two come on to discuss what's troubling ex-OpenAI safety team members.

Listen to the full episode on Big Technology Podcast

Spotify: https://spoti.fi/32aZGZx
Apple: https://apple.co/3AebxCK
Other platforms: https://pod.link/1522960417/


We discuss whether Saunders's former team saw something secret and damning inside OpenAI, or whether it was a general cultural issue. Then we talk about the 'Right to Warn', a proposed policy that would give AI insiders the right to share concerning developments with third parties without fear of reprisal. Tune in for a revealing look into the eye of a storm brewing in the AI community.

Sound Bites

"I thought that this would mean that they would prioritize putting safety first. But over time, it started to really feel like the decisions being made by leadership were more like the white star line building the Titanic."
"I'm more afraid that like GPT-5 or GPT-6 or GPT-7 might be the Titanic in this analogy."
"The disturbing part of this was that the only response was reprimanding the person who raised these concerns."
"You need to have external review as well."
"Would you imagine it's enough just to have something like the SEC as a way to complain about this?"
"Ideally, you can just like call up somebody at the government and say like, hey, I think this might be going like a little bit wrong. What do you think about it?"


Chapters:

00:00 Introduction and Overview
01:03 Concerns about OpenAI's Trajectory and Prioritization of Product Development
09:59 Government Oversight and the Role of Regulatory Agencies in AI
11:36 The Challenges of Whistleblowing in the AI Industry
29:09 The Importance of External Review and Oversight
30:02 Whistleblowing and the Role of the SEC
31:57 The Need for Legislation
39:12 Accountability, Transparency, and a Culture of Safety
49:38 Assessing the Potential Risks and Timeline
52:50 Taking Proactive Measures to Ensure AI Safety
...

OpenAI Whistleblower Speaks | Interview

June 7, 2024 9:33 pm

The OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

This is a clip from episode 86: https://youtu.be/9mMEL7ShOmw

Additional Reading:

OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance
https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html


What Aren’t The OpenAI Whistle-Blowers Saying?
https://www.platformer.news/tuesday-newsletter/


The Opaque Investment Empire Making OpenAI’s Sam Altman Rich
https://www.wsj.com/tech/ai/openai-sam-altman-investments-004fc785



Hard Fork is a weekly look into the future that's already here. Hosts Kevin Roose and Casey Newton explore stories from the bleeding edge of tech.
Casey’s publication, Platformer: https://www.platformer.news/


Subscribe to the audio-only podcast:
Apple - https://podcasts.apple.com/us/podcast/hard-fork/id1528594034
Spotify - https://open.spotify.com/show/44fllCS2FTFr2x2kjP9xeT?si=f4a017fd2201479d
Amazon - https://music.amazon.com/podcasts/7c7fe198-e6a8-41a8-b0fe-1d46b976dcd8/hard-fork
Google - https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5zaW1wbGVjYXN0LmNvbS9sMmk5WW5UZA==
The New York Times - https://www.nytimes.com/column/hard-fork


Credits
“Hard Fork” is hosted by Kevin Roose and Casey Newton. Produced by Rachel Cohn and Whitney Jones. Edited by Jen Poyant. Engineering by Alyssa Moxley and original music by Dan Powell, Elisheba Ittoop and Marion Lozano. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Motion graphics by Phil Robibero. Thumbnails by Julia Moburg, Elizabeth Bristow, and Harshal Duddalwar.
Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti and Jeffrey Miranda.
...

HEAD OF US AI SAFETY SAYS 50-50 CHANCE AI K*LLS US ALL

April 23, 2024 3:43 pm

Paul Christiano, head of the US AI Safety Institute, shares his alarming prediction that there's a 50% chance artificial intelligence could lead to human extinction. As a renowned AI safety expert, Christiano's sobering assessment highlights the critical need for responsible AI development and addressing potential risks. This thought-provoking video delves into the challenges we face as AI advances at an unprecedented pace. ...

Helen Toner on Firing Sam Altman - What REALLY Happened at OpenAI

May 28, 2024 10:33 pm

What REALLY happened at OpenAI? Former board member Helen Toner breaks her silence with shocking new details about Sam Altman's firing. Hear the exclusive, untold story on The TED AI Show: https://link.chtbl.com/TEDAI

I went into this convo expecting an AI policy maximalist, but found Helen Toner to be much more nuanced (and fascinating). Listen to Helen's take on OpenAI, self-governance, AI policy, and finding optimum acceleration.

...

Ari Emanuel calls Sam Altman a conman

June 28, 2024 11:41 pm

Aspen, June 2024

...

Leopold Aschenbrenner - 2027 AGI, China/US Super-Intelligence Race, & The Return of History

June 4, 2024 6:00 pm

Chatted with my friend Leopold Aschenbrenner about the trillion-dollar cluster, unhobblings + scaling = 2027 AGI, CCP espionage at AI labs, leaving OpenAI and starting an AGI investment firm, dangers of outsourcing clusters to the Middle East, & The Project.

Read the new essay series from Leopold this episode is based on here: https://situational-awareness.ai/

Timestamps
00:00:00 The trillion-dollar cluster and unhobbling
00:21:20 AI 2028: The return of history
00:41:15 Espionage & American AI superiority
01:09:09 Geopolitical implications of AI
01:32:12 State-led vs. private-led AI
02:13:12 Becoming Valedictorian of Columbia at 19
02:31:24 What happened at OpenAI
02:46:00 Intelligence explosion
03:26:47 Alignment
03:42:15 On Germany, and understanding foreign perspectives
03:57:53 Dwarkesh's immigration story and path to the podcast
04:03:16 Random questions
04:08:47 Launching an AGI hedge fund
04:20:03 Lessons from WWII
04:29:57 Coda: Frederick the Great

Links
Transcript: https://www.dwarkeshpatel.com/p/leopold-aschenbrenner
Apple Podcasts: https://podcasts.apple.com/us/podcast/leopold-aschenbrenner-china-us-super-intelligence-race/id1516093381?i=1000657821539
Spotify: https://open.spotify.com/episode/5NQFPblNw8ewxKolIDpiYN?si=6NaTHAugT2SxZrspW3lziw

Follow me on Twitter: https://twitter.com/dwarkesh_sp
Follow Leopold on Twitter: https://x.com/leopoldasch
...

Dario Amodei (Anthropic CEO) - $10 Billion Models, OpenAI, Scaling, & Alignment

August 8, 2023 3:40 pm

Here is my conversation with Dario Amodei, CEO of Anthropic.

Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

Transcript: https://www.dwarkeshpatel.com/dario-amodei
Apple Podcasts: https://apple.co/3rZOzPA
Spotify: https://spoti.fi/3QwMXXU

Follow me on Twitter: https://twitter.com/dwarkesh_sp

---
I’m running an experiment on this episode.

I’m not doing an ad.

Instead, I’m just going to ask you to pay for whatever value you feel you personally got out of this conversation.

Pay here: https://bit.ly/3ONINtp
---

(00:00:00) - Introduction
(00:01:00) - Scaling
(00:15:46) - Language
(00:22:58) - Economic Usefulness
(00:38:05) - Bioterrorism
(00:43:35) - Cybersecurity
(00:47:19) - Alignment & mechanistic interpretability
(00:57:43) - Does alignment research require scale?
(01:05:30) - Misuse vs misalignment
(01:09:06) - What if AI goes well?
(01:11:05) - China
(01:15:11) - How to think about alignment
(01:31:31) - Is modern security good enough?
(01:36:09) - Inefficiencies in training
(01:45:53) - Anthropic’s Long Term Benefit Trust
(01:51:18) - Is Claude conscious?
(01:56:14) - Keeping a low profile
...

Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Spies, Microsoft, & Enlightenment

March 27, 2023 3:57 pm

Asked Ilya Sutskever (Chief Scientist of OpenAI) about
- time to AGI
- leaks and spies
- what's after generative models
- post AGI futures
- working with MSFT and competing with Google
- difficulty of aligning superhuman AI

Hope you enjoy as much as I did!

Transcript: https://www.dwarkeshpatel.com/p/ilya-sutskever
Apple Podcasts: https://apple.co/42H6c4D
Spotify: https://spoti.fi/3LRqOBd

Follow me on Twitter: https://twitter.com/dwarkesh_sp

Timestamps
00:00 Time to AGI
05:57 What’s after generative models?
10:57 Data, models, and research
15:27 Alignment
20:53 Post AGI Future
26:56 New ideas are overrated
36:22 Is progress inevitable?
41:27 Future Breakthroughs
...

Paul Christiano - Preventing an AI Takeover

October 31, 2023 4:16 pm

Talked with Paul Christiano (world’s leading AI safety researcher) about:

- Does he regret inventing RLHF?
- What do we want the post-AGI world to look like (do we want to keep gods enslaved forever)?
- Why he has relatively modest timelines (40% by 2040, 15% by 2030),
- Why he’s leading the push to get labs to develop responsible scaling policies, and what it would take to prevent an AI coup or bioweapon,
- His current research into a new proof system, and how this could solve alignment by explaining models’ behavior,
- and much more.

Open Philanthropy

Open Philanthropy is currently hiring for twenty-two different roles to reduce catastrophic risks from fast-moving advances in AI and biotechnology, including grantmaking, research, and operations.
For more information and to apply, please see this application: https://www.openphilanthropy.org/research/new-roles-on-our-gcr-team/
The deadline to apply is November 9th; make sure to check out those roles before they close.

Transcript: https://www.dwarkeshpatel.com/p/paul-christiano
Apple Podcasts: https://podcasts.apple.com/us/podcast/paul-christiano-preventing-an-ai-takeover/id1516093381?i=1000633226398
Spotify: https://open.spotify.com/episode/5vOuxDP246IG4t4K3EuEKj?si=VW7qTs8ZRHuQX9emnboGcA

Follow me on Twitter: https://twitter.com/dwarkesh_sp

Timestamps
(00:00:00) - What do we want post-AGI world to look like?
(00:24:25) - Timelines
(00:45:28) - Evolution vs gradient descent
(00:54:53) - Misalignment and takeover
(01:17:23) - Is alignment dual-use?
(01:31:38) - Responsible scaling policies
(01:58:25) - Paul’s alignment research
(02:35:01) - Will this revolutionize theoretical CS and math?
(02:46:11) - How Paul invented RLHF
(02:55:10) - Disagreements with Carl Shulman
(03:01:53) - Long TSMC but not NVIDIA
...

Ilya Sutskever | I saw that AGI must be safe and controllable | The consequences could be disastrous

July 2, 2024 3:00 pm

...

OpenAI WHISTLEBLOWER Reveals What OpenAI Is Really Like!

June 20, 2024 1:00 am

...

Former OpenAI And Google Researchers BREAK SILENCE on AI

June 13, 2024 3:15 pm

Links from today's video:
https://righttowarn.ai
...

Ex-OpenAI Employee Just Revealed it ALL!

June 8, 2024 3:30 pm

Links from today's video:
https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf
...

OpenAI Researcher BREAKS SILENCE "AGI Is NOT SAFE"

May 18, 2024 1:56 am

Links from today's video:
https://x.com/elonmusk/status/1791550077611217015
https://x.com/janleike/status/1791498174659715494
https://x.com/sama/status/1791543264090472660
...
