Elon Musk

One of the most famous and richest people on the planet, often referred to as a miracle maker for making the impossible possible by leading revolutionary enterprises such as SpaceX, Tesla, Neuralink, X (formerly known as Twitter), The Boring Company, and more…
On the topic of AI, his story is also extraordinary.
He co-founded OpenAI because he was worried about the existential risk he saw coming from Google being the only serious player in the field. His well-intentioned actions backfired when OpenAI eventually disrupted everything with GPT-3 and kicked off the AI race to the bottom.
He recently brought out his own xAI model to compete with Microsoft, Google, and Anthropic, with a focus on pure truth-seeking and curiosity. Many experts argue that, while his motives are once again pure, he is repeating past mistakes and is in fact unwittingly accelerating the doom train significantly.

I mean, with artificial intelligence… we are summoning the demon 😈❤️‍🔥👿

Once there is awareness, people will be extremely afraid — as they should be.

Larry Page called me a speciesist.

We are on the event horizon of the black hole that is Artificial Super Intelligence.

AI is an existential risk – UK safety summit – BBC News

Mark my words, AI is far more dangerous than nukes.

When Elon Musk Realized Jack Ma is an Idiot!

When Elon Musk freaked Joe Rogan out!

Singularity by 2025

AI and the end of civilization

Implicit misalignment

10 to 20% probability that AI annihilates us.

I am the reason OpenAI exists

I met with Obama and Congress

For years I tried to convince people to slow down AI, to regulate AI. It was futile. Nobody listened.

About xAI

Elon’s hope for a maximally curious AI (xAI – Grok) seems to be that a god-level entity that is maximizing curiosity and truth-seeking will be “obsessed” with the preservation of value, because value is “more interesting”. So the claim seems to be: if I find you infinitely interesting, I will never destroy you.
Indeed, if I were a god-level entity that found everything infinitely interesting, I would not destroy anything, because after its destruction I would no longer be able to observe and study it.

There is something to this theory, and it might even be directionally correct, but it is not reassuring once you consider other obvious paths. What if this super-intelligent “scientist” finds it super-interesting to conduct endless “lab” experiments on humans? What if it gets very curious about answering a question like “how long would a ladder made out of human bones be?” Or what if it decides that “it would be awesome to harness the energy around a black hole; it would be infinitely fascinating to put one together nearby in the solar system”…

Here are a few resources critical of Elon’s approach with xAI and Grok:

The Cake is a Lie!!!
Maximally curious AGI.

Damnit Elon, your gut is wrong about AI, and you’re making a deadly mistake. Maximizing *any* simple criterion means we die!
“Maximum truth-seeking & curiosity” means we die:
Valuing maximum truth implies a preference for strategies like building giant telescopes, particle accelerators, supercomputing clusters, and other scientific instruments to learn the maximum amount of truth.
It’s a matter of logical truth that more resources, more power, and more freedom to build (i.e. no more meddling humans around) are rational actions for a superintelligent truth-maximizing AI.
“Maximum interestingness” means we die:
Maybe the AI will agree with you that a world with humans is more interesting than a completely dead world. But why would humans be maximally interesting? Why would a planet of humans be the most interesting possible world available to an AI?
Isn’t it interesting to have the largest variety of biological species competing with each other, regardless of whether they have brains or consciousness?
Isn’t it interesting to maximize entropy by speeding up the growth of black holes and hastening the heat death of the universe? (Rhetorical question, though some folks here would actually say yes 🤦‍♂️)

By the time you attempt to clarify what you really mean by “interesting”:

  • Sentient, conscious, reflective beings are interesting
  • Love is interesting
  • Social relationships are interesting
  • Freedom is interesting
  • It’s not interesting if someone uses their freedom to make gray goo nanotech that chews through the planet.
  • Etc., etc.

Then you realize your gut hasn’t begun to address the problem you claim to be addressing. Adequately defining “interesting” is equivalent to the original AI alignment problem your gut was tasked with.
Your gut produced a keyword with a positive connotation (“interestingness”), and you proceeded to frame that keyword as a sketch of a solution to AI alignment. By doing this, you’re misleading your audience to think that somehow the solution to AI alignment is *not* incredibly subtle, *not* plausibly intractable to human researchers in this decade.

> Ok Liron, but you’re attacking a straw man. Surely Elon doesn’t literally mean “maximizing”; he just thinks those values are important.
No, that’s the entire alignment problem that we need to solve 😭: the problem of how to trade off all the complex values that define what it means for the universe to be better than literally nothing. It doesn’t help at all to propose “maximizing” anything.
The problem is that preferring too much of any one particular value creates a hellish nightmare scenario, not a near-miss of heaven. Only an extremely subtle mix of human values can ever plausibly be an optimization criterion that an AI can use to steer the future toward a good outcome.
So please Elon, read @MIRIBerkeley’s research and warnings in more detail than you have in the past.
Please Elon, don’t be in this position where your gut can deceive you with hopeless optimistic-sounding answers.
You’re often the closest thing humanity has to being the adult in the room. We need you to shift to a higher gear of seriousness on AI alignment right now.
🙏
by Liron Shapira
