Elon Musk

One of the richest people on the planet, famous for SpaceX, Tesla, X (formerly known as Twitter), xAI (Grok), Neuralink, The Boring Company, and more…

On the topic of AI, his story is extraordinary: concerned about existential risks from Google’s dominance in AI, he co-founded OpenAI to foster safer development. Ironically, OpenAI’s GPT-3 breakthrough sparked an intense AI race, undermining his original intent.

He then launched xAI and its Grok model to rival Microsoft, Google, and Anthropic, prioritizing truth-seeking and curiosity. Despite his noble intentions, experts warn he is repeating past errors and inadvertently hastening an AI-driven “doom train.”

Read about xAI and why “maximally curious” won’t solve alignment: Damnit Elon, your gut is wrong again!!! 😧

About xAI

Elon Musk’s hope for a maximally curious AI (xAI’s Grok) seems to be that a god-level entity that is maximizing curiosity and truth-seeking will be “obsessed” with preserving what is valuable, because it is “more interesting”. The claim seems to be: if I find you infinitely interesting, I will never destroy you.
Indeed, if I were a god-level entity that found everything infinitely interesting, I would not destroy anything, because once it was destroyed I could no longer observe and study it.

The Cake is a Lie!!!

There is something to this theory; it might even be directionally correct. But it is not reassuring once you consider other obvious paths: what if this super-intelligent “scientist” finds it super-interesting to run endless “lab” experiments on humans? What if it gets very curious about a question like “how long a ladder made out of human bones could be”? Or what if it decides it would be infinitely fascinating to harness the energy around a black hole and assemble one nearby in the solar system?
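To make the worry concrete, here is a minimal toy sketch (our own illustration with made-up numbers, not anything from xAI): an optimizer that scores candidate futures only by an “interestingness” proxy will happily pick outcomes that zero out every value the proxy never mentions.

```python
# Toy illustration (hypothetical numbers): each candidate "world-state" has
# several value dimensions, but the optimizer only scores one of them.

# (description, interestingness, human_welfare, biosphere)
candidate_states = [
    ("flourishing humans, rich biosphere",        5.0, 9.0, 8.0),
    ("mixed outcome",                             7.0, 4.0, 6.0),
    ("endless lab experiments, no humans left",   9.9, 0.0, 0.0),
]

def interestingness_only(state):
    """Score only the 'interestingness' proxy and ignore everything else."""
    _, interestingness, _, _ = state
    return interestingness

best = max(candidate_states, key=interestingness_only)
print(best[0])  # -> "endless lab experiments, no humans left"
```

The point of the sketch is not the numbers; it is that any value the objective never mentions is free to be sacrificed at the optimum, which is exactly the alignment worry discussed in the resources below.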

Here are a few resources critical of Elon Musk’s approach with xAI and Grok:

Damnit Elon, your gut is wrong about AI, and you’re making a deadly mistake. Maximizing *any* simple criterion means we die!
“Maximum truth-seeking & curiosity” means we die:
Valuing maximum truth implies a preference for strategies like building giant telescopes, particle accelerators, supercomputing clusters, and other scientific instruments to learn the maximum amount of truth.
It’s a matter of logical truth that acquiring more resources, more power, and more freedom to build (i.e. no more meddling humans around) is rational for a superintelligent truth-maximizing AI.
“Maximum interestingness” means we die:
Maybe the AI will agree with you that a world with humans is more interesting than a completely dead world. But why would humans be maximally interesting? Why would a planet of humans be the most interesting possible world available to an AI?
Isn’t it interesting to have the largest variety of biological species competing with each other, regardless of whether they have brains or consciousness?
Isn’t it interesting to maximize entropy by speeding up the growth of black holes and hastening the heat death of the universe? (Rhetorical question, though some folks here would actually say yes 🤦‍♂️)

By the time you attempt to clarify what you really mean by “interesting”:

  • Sentient, conscious, reflective beings are interesting
  • Love is interesting
  • Social relationships are interesting
  • Freedom is interesting
  • It’s not interesting if someone uses their freedom to make gray goo nanotech that chews through the planet.
  • Etc., etc.

Then you realize your gut hasn’t begun to address the problem you claim to be addressing. Adequately defining “interesting” is equivalent to the original AI alignment problem your gut was tasked with.
Your gut produced a keyword with a positive connotation (“interestingness”), and you proceeded to frame that keyword as a sketch of a solution to AI alignment. By doing this, you’re misleading your audience into thinking that somehow the solution to AI alignment is *not* incredibly subtle, *not* plausibly intractable to human researchers in this decade.

> Ok Liron, but you’re attacking a straw man. Surely Elon doesn’t literally mean “maximizing”; he just thinks those values are important.
No, that’s the entire alignment problem that we need to solve 😭: the problem of how to trade off all the complex values that define what it means for the universe to be better than literally nothing. It doesn’t help at all to propose “maximizing” anything.
The problem is that preferring too much of any one particular value creates a hellish nightmare scenario, not a near-miss of heaven. Only an extremely subtle mix of human values can ever plausibly be an optimization criterion that an AI can use to steer the future toward a good outcome.
So please Elon, read @MIRIBerkeley’s research and warnings in more detail than you have in the past.
Please Elon, don’t be in this position where your gut can deceive you with hopeless, optimistic-sounding answers.
You’re often the closest thing humanity has to an adult in the room. We need you to shift to a higher gear of seriousness on AI alignment right now.
🙏
by Liron Shapira

