OpenAI is the company behind ChatGPT and GPT-4, leading the frontier AI race and arguably one of the most likely contenders, if not the most likely, to bring AGI to the world.
It was initially founded as a non-profit in December 2015 as a counterweight to Google DeepMind (which at that point was the only serious player on the AI front), but it quickly evolved into a closed, for-profit money-making machine, racing to become one of the most capital-heavy corporations on the planet (there has been talk of raising $7 trillion and building its own nuclear plants).
Elon Musk, who was a co-founder of its "open, non-profit" version, famously said on the topic: “This would be like, let’s say you funded an organization to save the Amazon rainforest, and instead they became a lumber company, and chopped down the forest and sold it for money.”
The company's security setup has frequently been compared to Swiss cheese: it has been repeatedly hacked by foreign agents, with the general public only informed months later. Several employees have claimed that, by now, countries like China and Russia have more information and data about the frontier work done there than the US government itself.
Most alarmingly, the culture has shifted away from the safety mindset of the early days, when half the company worked with existential risk at the top of the agenda, towards a short-term, profit-driven focus, where the people who care are either gone or too scared to speak publicly, forced to go through whistleblower channels to voice their concerns.
The selective pressure towards the wrong priorities has been significant, with multiple waves of resignations among the best alignment and safety scientists. Much of the world's top scientific talent working on the alignment problem has left the company because they were not given the resources they needed, the environment was toxic, and the race dynamics were pushing everything towards reckless acceleration.
The Superalignment team was dissolved on May 17, 2024.
Read here for more information about what exactly happened:
“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded
OpenAI whistleblower Daniel Kokotajlo: Nearly half of the AI safety researchers at OpenAI have left the company.
This includes the previously unreported departures of Jan Hendrik Kirchner, Collin Burns, Jeffrey Wu, Jonathan Uesato, Steven Bills, Yuri Burda, and Todor Markov.… pic.twitter.com/WDjqZ9xg0a
— ControlAI (@ai_ctrl) August 27, 2024
OpenAI co-founder Wojciech Zaremba compares self-modifying AI to cancer:
• Once there is an increased number of AIs and they're modifying their own code, there is a process of natural selection.
• The AI that wants to maximally spread will be the one that exists. pic.twitter.com/38E4jj6bTG
— ControlAI (@ai_ctrl) July 18, 2024