OpenAI

OpenAI is the company behind ChatGPT and GPT-4, leading the frontier AI race and arguably one of the most likely contenders, if not the most likely, to bring AGI to the world.

It was initially founded as a non-profit in December 2015, as a counterweight to Google DeepMind (which at that point was the only serious player on the frontier-AI front), but it quickly evolved into a closed, for-profit money-making machine, racing to become one of the most capital-heavy corporations on the planet (there has been talk of raising as much as $7 trillion and of building its own nuclear power plants).
Elon Musk, who co-founded its original open, non-profit incarnation, famously said about the shift: “This would be like, let’s say you funded an organization to save the Amazon rainforest, and instead they became a lumber company, and chopped down the forest and sold it for money.”

The company’s security setup has frequently been described as Swiss cheese: it has reportedly been breached by foreign agents on multiple occasions, with the general public only informed months later. Several employees have claimed that, by now, countries like China and Russia have more information and data about the frontier work being done there than the US government itself.

Most alarmingly, the culture has shifted away from the safety mindset of the early days, when half the company worked with existential risk at the top of the agenda, towards a short-term, profit-driven focus. The people who still care are either gone or too scared to speak publicly, and are forced to go through whistleblower channels to voice their concerns.

The selective pressure towards the wrong priorities has been significant: there have been multiple waves of resignations among the company’s best alignment and safety scientists. Much of the world’s top talent working on the alignment problem has left, citing a lack of resources, a toxic environment, and race dynamics that pushed everything towards reckless acceleration.

What The Ex-OpenAI Safety Employees Are Worried About

July 3, 2024 4:42 pm

William Saunders is an ex-OpenAI Superalignment team member. Lawrence Lessig is a professor of Law and Leadership at Harvard Law School. The ...

OpenAI Whistleblower Speaks | Interview

June 7, 2024 9:33 pm

The OpenAI whistleblower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of ...

HEAD OF US AI SAFETY SAYS 50-50 CHANCE AI K*LLS US ALL

April 23, 2024 3:43 pm

Paul Christiano, head of the US AI Safety Institute, shares his alarming prediction that there's a 50% chance artificial intelligence ...

Helen Toner on Firing Sam Altman - What REALLY Happened at OpenAI

May 28, 2024 10:33 pm

What REALLY happened at OpenAI? Former board member Helen Toner breaks her silence with shocking new details about Sam Altman's firing. Hear the ...

Ari Emanuel calls Sam Altman a conman

June 28, 2024 11:41 pm

Aspen, June 2024


Leopold Aschenbrenner - Superhuman Intelligence By End of Decade

June 4, 2024 6:00 pm

Chatted with my friend Leopold Aschenbrenner about the trillion dollar cluster, unhobblings + scaling = 2027 AGI, CCP espionage at AI labs, leaving ...

Dario Amodei (Anthropic CEO) - The Hidden Pattern Behind Every AI Breakthrough

August 8, 2023 3:40 pm

Here is my conversation with Dario Amodei, CEO of Anthropic.

Dario is hilarious and has fascinating takes on what these models are doing,
...

Ilya Sutskever (OpenAI Chief Scientist) - Why Next-Token Prediction Could Surpass Human Intelligence

March 27, 2023 3:57 pm

Asked Ilya Sutskever (Chief Scientist of OpenAI) about
- time to AGI
- leaks and spies
- what's after generative models
- post AGI
...

Paul Christiano - Preventing an AI Takeover

October 31, 2023 4:16 pm

Talked with Paul Christiano (world’s leading AI safety researcher) about:

- Does he regret inventing RLHF?
- What do we want post-AGI
...

Ilya Sutskever | I saw that AGI must be safe and controllable | The consequences could be disastrous

July 2, 2024 3:00 pm


OpenAI WHISTLEBLOWER Reveals What OpenAI Is Really Like!

June 20, 2024 1:00 am


Former OpenAI And Googlers Researchers BREAK SILENCE on AI

June 13, 2024 3:15 pm


Ex-OpenAI Employee Just Revealed it ALL!

June 8, 2024 3:30 pm


OpenAI Researcher BREAKS SILENCE "Agi Is NOT SAFE"

May 18, 2024 1:56 am

