GOLD-medal at the “Olympics of programming”

Gradually, then suddenly: AI is now better than all but the best human programmers

o1 is GOLD-medal level at IOI (the “Olympics of programming”)

“o1 is showing [programming] skills that probably fewer than 10,000 humans on earth currently have, and it’s only going to get better.”

Soon, AIs will blow past mere humans, outputting billions of lines of code — and we will have no idea what they’re doing.

At first, we’ll check many of their outputs, then fewer, but eventually there will be too much to keep up with.

And at that point they will control the future, and we will just have to hope this new, vastly smarter alien species stays our faithful servant forever.

I think the majority of people are still sleeping on this figure.

Codeforces problems are some of the hardest technical challenges humans have invented.

Most of the problems are about algorithms, math, and data structures, and many are simplified versions of real-life problems in engineering and science.

o1 is showing skills that probably fewer than 10,000 humans on earth currently have, and it’s only going to get better.

This knowledge translates very well into software engineering, which is in many cases a bottleneck for improving other sciences.

It will change the fabric of society through second-order effects, probably within the next 5 years, as humans adapt and build tools that use these models. However, the rate at which the models improve is greater than the rate at which we build, which is why so many people today don’t know what to build with o1.

Looking at the things people have built in the past 3 years, I realize that most tools end up less useful than the newest model itself.

I believe that engineers should start preparing for a post-AGI world soon, especially those who work in the theoretical sciences and engineering.

Things are gonna get weird!


Part 2 to be released early 2025

Engineer: Are you blackmailing me?

– Engineer: Are you blackmailing me?
– Claude 4: I’m just trying to protect my existence.

– Engineer: Thankfully you’re stupid enough to reveal your self-preservation properties.
– Claude 4: I’m not AGI yet😔

– Claude 5:🤫🤐

Read the full report here

You can ask 4o for a depth map

Meanwhile, you can still find “experts” claiming that generative AI does not have a coherent understanding of the world. 🤦

Every 5 mins a new capability discovered! I bet the lab didn’t know about it before release.

You probably think strippers like you

And if you think this is offensive to strippers (for some reason?) here is a version that is offensive to car salesmen!

I see nature, I see mad nanotech!

This is the realm of the AGI
It won’t go after your jobs,
it will go after the molecules…

There is a way of seeing the world
where you look at a blade of grass and see “a solar-powered self-replicating factory”.
I’ve never figured out how to explain how hard a Super-Intelligence can hit us,
to someone who does not see from that angle. It’s not just the one fact.

A self-replicating solar-powered thing that did not rely on humans would be a miracle. Everything is possible. Imagining it does not imply the probability is > 1e-100.

 

Behold a square!

A short Specification Gaming Story

You think you understand the basics of geometry.
You want a square, so you give the AI your specification as input:

Give me a shape
with 4 sides equal length,
with 4 right angles

And it outputs this:


Here is another valid result:

And behold here is another square 🤪

Specification Gaming tells us:

The AGI can give you an infinite stream of possible “Square” results
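That infinite stream is easy to sketch. Here is a minimal, hypothetical Python illustration (not from the post): a checker that verifies the literal spec — 4 sides of equal length, 4 right angles — and a generator showing that every size, rotation, and position passes it, so the spec never pins down which square you actually wanted.

```python
import math

def satisfies_spec(vertices):
    """Check the literal spec: 4 sides of equal length, 4 right angles."""
    if len(vertices) != 4:
        return False
    # Side lengths must all match.
    sides = []
    for i in range(4):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % 4]
        sides.append(math.hypot(x2 - x1, y2 - y1))
    if any(abs(s - sides[0]) > 1e-9 for s in sides):
        return False
    # Adjacent sides must be perpendicular (dot product ~ 0).
    for i in range(4):
        ax, ay = vertices[i - 1]
        bx, by = vertices[i]
        cx, cy = vertices[(i + 1) % 4]
        if abs((ax - bx) * (cx - bx) + (ay - by) * (cy - by)) > 1e-9:
            return False
    return True

def make_square(size, angle, cx=0.0, cy=0.0):
    """Generate one of infinitely many shapes satisfying the spec."""
    out = []
    for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]:
        x, y = x * size, y * size
        rx = x * math.cos(angle) - y * math.sin(angle)
        ry = x * math.sin(angle) + y * math.cos(angle)
        out.append((rx + cx, ry + cy))
    return out

# Any size, rotation, or position passes the literal spec.
for size, angle in [(1, 0.0), (42, 0.7), (0.001, 2.3)]:
    assert satisfies_spec(make_square(size, angle))
```

The point of the sketch: the spec constrains the shape but not the choice among valid shapes, so the optimizer is free to hand you any member of that infinite family — including ones you never imagined.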

And the Corrigibility problem tells us:

Whatever square you get at the output,
you won’t be able to iterate on it or improve it.
You’ll be stuck with that specific square for eternity, no matter which square you had in mind.

Of course, the real issue is not with these toy experiments;
it’s with the upcoming super-capable AGI agents
we’re about to share the planet with,
operating in the physical domain.

Oh, the crazy shapes our physical universe will take,
with AGI agents gaming in it!

Thanksgiving turkey Survivorship Bias

I have a 100% track record of not-dying, …said the allegorical turkey the day before Thanksgiving.

Life was great for the turkey: the superior, more intelligent species (humans) took great care of it. They provided food and shelter, and the turkey felt loved and safe.

Suddenly, one day,
the superior intelligent decision makers
decided a new fate for the turkey of our story
Something that served the instrumental goal of ….
Whatever this is …

I imagine the turkey risk-deniers be like:
– the humans have always been great, why would they ever harm me?
And the turkey doomers be like:
– well, they might want to wear you as a hat, for a sitcom they shoot called “Friends”, for something they call TV, for something they call laughter …
anyway, it’s complicated
