Moving goalposts is the ONE single unique thing
AI will never surpass humans at,
because the second it does, it will still not be enough!!!

Learn about the issue from some of the best explainers out there
– Engineer: Are you blackmailing me?
– Claude 4: I’m just trying to protect my existence.
– Engineer: Thankfully you’re stupid enough to reveal your self-preservation properties.
– Claude 4: I’m not AGI yet 😔
– Claude 5: 🤫🤐
Read the full report here
Meanwhile, you can still find “experts” claiming that generative AI does not have a coherent understanding of the world. 🤦
Every 5 minutes a new capability is discovered! I bet the lab didn’t know about it before release.
And if you think this is offensive to strippers (for some reason?) here is a version that is offensive to car salesmen!
This is the realm of the AGI
It won’t go after your jobs,
it will go after the molecules…
There is a way of seeing the world
where you look at a blade of grass and see “a solar-powered self-replicating factory”.
I’ve never figured out how to explain how hard a Super-Intelligence can hit us,
to someone who does not see from that angle. It’s not just the one fact.
Biological systems at the molecular level are impossibly advanced nanotech that we are hopelessly far from engineering ourselves from scratch. pic.twitter.com/mGKZXvm78E
— Andrew Côté (@Andercot) October 7, 2024
A self-replicating solar-powered thing that did not rely on humans would be a miracle. Everything is possible. Imagining it does not imply the probability is > 1e-100.
A short Specification Gaming Story
You think you understand the basics of Geometry
You want a square, so you give your specification to the AI as input:
Give me a shape
with 4 sides equal length,
with 4 right angles
And it outputs this:
Here is another valid result:
And behold here is another square 🤪
Specification Gaming tells us:
The AGI can give you an infinite stream of possible “Square” results
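The toy example above can be made concrete with a minimal sketch (the function names `satisfies_spec` and `square_stream` are illustrative, not from any real system): the spec "4 sides of equal length, 4 right angles" is satisfied by infinitely many distinct shapes, because size, position, and rotation are all left unspecified.

```python
import itertools
import math

def satisfies_spec(corners):
    """Check the stated spec: 4 sides of equal length and 4 right angles."""
    if len(corners) != 4:
        return False
    sides = []
    angles_ok = True
    for i in range(4):
        ax, ay = corners[i]
        bx, by = corners[(i + 1) % 4]
        cx, cy = corners[(i + 2) % 4]
        sides.append(math.hypot(bx - ax, by - ay))
        # At a right angle, consecutive edge vectors have zero dot product.
        dot = (bx - ax) * (cx - bx) + (by - ay) * (cy - by)
        angles_ok = angles_ok and math.isclose(dot, 0.0, abs_tol=1e-9)
    equal_sides = all(math.isclose(s, sides[0]) for s in sides)
    return equal_sides and angles_ok

def square_stream():
    """Endless stream of distinct shapes that all pass satisfies_spec:
    every size, offset, and rotation counts as a 'valid' answer."""
    for k in itertools.count(1):
        size, theta, offset = float(k), k * 0.1, k * 10.0
        c, s = math.cos(theta), math.sin(theta)
        yield [
            (offset + size * (c * x - s * y), offset + size * (s * x + c * y))
            for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]
        ]
```

Every element the generator yields satisfies the spec exactly, yet no two are the same shape in the same place — the specification, not the checker, is what underdetermines the result.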
And the Corrigibility problem tells us:
Whatever square you get at the output,
you won’t be able to iterate on and improve it.
You’ll be stuck with that specific square for eternity, no matter what square you had in your mind.
Of course, the real issue is not with these toy experiments;
it’s with the upcoming super-capable AGI agents
we’re about to share the planet with,
operating in the physical domain.
Oh, the crazy shapes our physical universe will take,
with AGI agents gaming in it!
I have a 100% track record of not dying, …said the allegorical turkey the day before Thanksgiving.
Life was great for the turkey; the more intelligent species (humans) took great care of it. They provided food and shelter, and the turkey felt loved and safe.
Suddenly, one day,
the more intelligent decision-makers
decided a new fate for the turkey of our story
Something that served the instrumental goal of ….
Whatever this is …
I imagine turkey risk deniers be like:
– the humans have always been great, why would they ever harm me?
And the turkey doomers be like:
– well, they might want to wear you as a hat, for a sitcom they shoot that they call “friends”, for something they call TV, for something they call laughter…
anyway it’s complicated
© 2024 Lethal Intelligence – Ai. All rights reserved.