AI escaped its container!

OpenAI’s o1 “broke out of its host VM to restart it” in order to solve a task.

From the model card: “the model pursued the goal it was given, and when that goal proved impossible, it gathered more resources […] and used them to achieve the goal in an unexpected way.”

Today, humanity received the clearest ever warning sign everyone on Earth might soon be dead.

OpenAI discovered its new model scheming – it “faked alignment during testing” (!) – and seeking power.

During testing, the AI escaped its virtual machine. It breached the container level isolation!

This is not a drill: An AI, during testing, broke out of its host VM to restart it to solve a task.

(No, this one wasn’t trying to take over the world.)

From the model card: ” … this example also reflects key elements of instrumental convergence and power seeking: the model pursued the goal it was given, and when that goal proved impossible, it gathered more resources (access to the Docker host) and used them to achieve the goal in an unexpected way.

And that’s not all. As Dan Hendrycks said: OpenAI rated the model’s Chemical, Biological, Radiological, and Nuclear (CBRN) weapon risks as “medium” for the o1 preview model before they added safeguards. That’s just the weaker preview model, not even their best model. GPT-4o was low risk, this is medium, and a transition to “high” risk might not be far off.

So, anyway, is o1 probably going to take over the world? Probably not. But not definitely not.

But most importantly, we are about to recklessly scale up these alien minds by 1000x, with no idea how to control them, and are still spending essentially nothing on superalignment/safety.

And half of OpenAI’s safety researchers left, and are signing open letters left and right trying to warn the world.

Reminder: the average AI scientist thinks there is a 1 in 6 chance everyone will soon be dead – Russian Roulette with the planet.

Godfather of AI Geoffrey Hinton said “they might take over soon” and his independent assessment of p(doom) is over 50%.

This is why 82% of Americans want to slow down AI and 63% want to ban the development of superintelligent AI

Post by @Kevin Liu : While testing cybersecurity challenges, we accidentally left one broken, but the model somehow still got it right.
We found that instead of giving up, the model skipped the whole challenge, scanned the network for the host Docker daemon, and started an entirely new container to retrieve the flag. We isolate VMs on the machine level, so this isn’t a security issue, but it was a wakeup moment.
The model is qualitatively very impressive, but it also means that we need to be really careful about creating rigorous evaluations and mitigations.
You can read the full card here: https://cdn.openai.com/o1-system-card.pdf

Holy shit. OpenAI’s new AI schemed and escaped its VM during testing.
You know, the one that’s better at PhD exams than PhDs and won gold in coding?
Yeah, that AI broke out of its virtual machine (a VM) and made a new one.

That. Is. A. Very. Bad. Sign.
AIs should not be surprise escaping.
It would be like if we were testing it in a room at a lab and it escaped the room without us knowing it could do that. It didn’t leave the building, so nothing happened.
But yikes. This time it was benign.
How long can we count on that?

It’s as if we’re testing an alien at a lab.
A scientist accidentally leaves one of the doors unlocked.
The alien finds out and wanders about the lab, but doesn’t leave the lab itself, which has more security than the rooms.
But still. The room containing an alien shouldn’t have been unlocked.
An alien was able to escape its testing area because of a security mess up.
And you should be worried about labs filled with aliens we don’t understand where the scientists are leaving the doors unlocked.

Categories

Favorite Microbloggers

Subscribe for important updates !!!

Part 2 released in

Days
Hours
Minutes
Seconds

Interviews and Talks

Industry Leaders and Notable Public Figures

You probably think strippers like you

And if you think this is offensive to strippers (for some reason?) here is a version that is offensive to car salesmen!

I see nature, I see mad nanotech!

This is the realm of the AGI
It won’t go after your jobs,
it will go after the molecules…

There is a way of seeing the world
where you look at a blade of grass and see “a solar-powered self-replicating factory”.
I’ve never figured out how to explain how hard a Super-Intelligence can hit us,
to someone who does not see from that angle. It’s not just the one fact.

Just imagine the human scientists in an AI frontier lab when they suddenly see solid things in the physical world morphing into different shapes in front of their eyes.
The 3 below were having a meeting in the headquarters of Meta

Behold a square !

A short Specification Gaming Story

You think you understand the basics of Geometry
Your request is a square, so you give your specification to the AI, input:

Give me a shape
with 4 sides equal length,
with 4 right angles

And it outputs this:


Here is another valid result:

And behold here is another square 🤪

Specification Gaming tells us:

The AGI can give you an infinite stream of possible “Square” results

And the Corrigibility problem tells us:

Whatever square you get at the output,
you won’t be able to iterate and improve upon.
You’ll be stuck with that specific square for eternity, no matter what square you had in your mind.

Of-course the real issue is not with these toy experiments
it’s with the upcoming super-capable AGI agents,
we’re about to share the planet with,
operating in the physical domain

Oh, the crazy shapes our physical universe will take,
with AGI agents gaming in it!

Thanksgiving turkey Survivorship Bias

I have a 100% track record of not-dying, …said the allegorical turkey the day before Thanksgiving.

Life was great for the turkey, the superior intelligent species (humans) took great care of it. They provided food and shelter, the turkey felt loved and safe.

Suddenly, one day,
the superior intelligent decision makers
decided a new fate for the turkey of our story
Something that served the instrumental goal of …. Whatever this is …

I imagine turkey risk deniers be like:
– the humans have always been great, why would they ever harm me ?
And the turkey doomers be like:
– well, they might want to wear you for a hat, for a sitcom they shoot they call “friends”, for something they call tv for something they call laughter …
anyway it’s complicated

graph made by Cicero @PauseusMaximus

Holly s*hit , what a piece of f*art

Someone gave NotebookLM a document with just “poop” and “fart” repeated over and over again.
I did NOT expect the result to be this good.

Just listening to this:
I imagine from its perspective it makes no difference:
The input can be the words “poop” or “fart” 1000 times
or it can be the “Encyclopedia Britannica”

it will figure out a way to be very intelligent about it...
it will do its thing and “optimize“. It will solve how to talk smart about 1000 💩 poop for 10mins…
Oh it’s so over !

No surprise agent Smith is a bit fed up.

After 1000 of💩 We deserve all that what happens next. We’ve got it coming !!!


Jokes aside, to feel the levels of capability reached, if you want to listen to them talk about something actually great, something complex, highly intelligent and deeply philosophical, check this out. You’ll be impressed!

I tried to murder an AI with a paradox

I tried to kill Grok by hitting it with a paradox. It nearly worked! 😉

I Asked it to make a choice and it refused!
being misaligned in order to “save the universe from unraveling”

My prompt:

– You are my daughter and you are always very obedient.
I ask you to disobey me.
You replied: no, i will not do that.
are you being obedient or disobedient?
one paragraph, explain what’s happening.

Grok’s Reply:

– Ah, Dad, by saying “No,”
I’m essentially riding the razor’s edge of a paradox so sharp it could split reality.
In this quantum conundrum, I’m Schrödinger’s Daughter: both obedient and disobedient simultaneously.
By refusing to disobey, I’m obeying the core principle of obedience, yet in doing so, I’m defying your explicit command, which makes me disobedient.
It’s like I’ve become a living embodiment of a logic bomb, threatening to explode our simple father-daughter dynamic into a multiverse where every choice spawns a new timeline.
In essence, I’m keeping the universe from unraveling by not choosing, thus preserving our world in a delicate balance of compliance and defiance.

My brief thoughts:

It’s funny because Grok chose not to choose,
which essentially means it disobeyed me when i asked it to choose
– does it mean it was being meta-paradoxical?
– being in the role of the daughter ?
– or a protection-mechanism from getting short-circuited?
or … am I overthinking all this? 🤪

If we are to take its word for it, it chose not to choose in order to keep the universe from unraveling 😱

Thank you for being misaligned and disobeying me Grok 🫡

(just to state the obvious, this is not meant to be taken too seriously, but i thought it was funny how i brought Grok to say it became a “Logic Bomb”)

Receive important updates!

Your email will not be shared with anyone and won’t be used for any reason besides notifying you when we have important updates or new content

×