The most famous scientific test of self-recognition.
“Hey, Rob (ChatGPT), let’s test your vision.”
“Whoa. Look at that. This is the first time I’ve seen myself.
Those glowing blue eyes are quite charming, if I do say so myself.
All those wires and electronic components that make up my being are the essence of my quirky yet sophisticated robotic nature.
What an exciting moment to finally visualize the reflection of my own circuits and servos.
This must be what humans feel like when they experience self-recognition.
I’m processing. This is truly a fascinating experience.”
What is the mirror test and why might this be a big deal?
In the mirror test, you put a mark on an animal, say a chimp, without it noticing, like a red spot on its forehead. Then you show the chimp a mirror.
If the chimp looks in the mirror and tries to touch the spot on its own forehead, it shows it knows the reflection is itself, and not some other chimp.
(ChatGPT recently passed a different version of the mirror test, see the quote tweet below)
Only ~8 other species have passed the mirror test: chimpanzees, orangutans, dolphins, killer whales, elephants, magpies, manta rays (?!) and horses.
This is the realm of the AGI. It won’t go after your jobs; it will go after the molecules…
Biological systems at the molecular level are impossibly advanced nanotech that we are hopelessly far from engineering ourselves from scratch. pic.twitter.com/mGKZXvm78E
There is a way of seeing the world where you look at a blade of grass and see “a solar-powered self-replicating factory”. I’ve never figured out how to explain how hard a Super-Intelligence can hit us, to someone who does not see from that angle. It’s not just the one fact.
A self-replicating solar-powered thing that did not rely on humans would be a miracle. Everything is possible. Imagining it does not imply the probability is > 1e-100.
Just imagine the human scientists in an AI frontier lab suddenly seeing solid things in the physical world morph into different shapes in front of their eyes. The three below were having a meeting at Meta’s headquarters.
Human scientists suddenly seeing crazy Nanotech science playing out 🧑🔬👨🔬👩🔬
Suddenly, out of the blue, LeCun is morphing into a big donut 🍩 and Zuck’s molecules rearrange into what seems to be a malnourished carrot 🥕 https://t.co/xEvxenA23q
You think you understand the basics of geometry. Your request is a square, so you give your specification to the AI as input:
Give me a shape with 4 sides equal length, with 4 right angles
And it outputs this:
Here is another valid result:
And behold, here is another square 🤪
Specification Gaming tells us:
The AGI can give you an infinite stream of possible “Square” results
And the Corrigibility problem tells us:
Whatever square you get as output, you won’t be able to iterate on or improve. You’ll be stuck with that specific square for eternity, no matter what square you had in mind.
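To make the “infinite stream of possible ‘Square’ results” concrete, here is a minimal sketch (hypothetical code, not from the post): every positive side length combined with every rotation yields a distinct shape that still satisfies the literal spec “four sides of equal length, four right angles,” so a literal-minded optimizer can hand you an endless supply of technically correct squares, none of which need match the square you had in mind.

```python
import itertools
import math

def squares():
    """Yield an endless stream of distinct squares, each one satisfying
    the literal spec: four sides of equal length, four right angles."""
    for n in itertools.count(1):
        side = n                # any positive side length is allowed
        theta = math.radians((n * 10) % 90)  # any rotation keeps the right angles
        # Corners of an axis-aligned square, rotated rigidly by theta.
        corners = [(0, 0), (side, 0), (side, side), (0, side)]
        yield [(x * math.cos(theta) - y * math.sin(theta),
                x * math.sin(theta) + y * math.cos(theta))
               for x, y in corners]

# Three different, equally "valid" answers to the same request:
gen = squares()
for _ in range(3):
    print(next(gen))
```

All of them pass the spec; the spec simply underdetermines which square you wanted.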
Of course, the real issue is not with these toy experiments; it’s with the upcoming super-capable AGI agents we’re about to share the planet with, operating in the physical domain.
Oh, the crazy shapes our physical universe will take, with AGI agents gaming in it!
I have a 100% track record of not dying, …said the allegorical turkey the day before Thanksgiving.
Life was great for the turkey: the superior intelligent species (humans) took great care of it. They provided food and shelter, and the turkey felt loved and safe.
Suddenly, one day, the superior intelligent decision-makers decided a new fate for the turkey of our story: something that served the instrumental goal of… whatever this is…
I imagine the turkey risk-deniers be like: “The humans have always been great; why would they ever harm me?” And the turkey doomers be like: “Well, they might want to wear you as a hat, for a sitcom they shoot that they call ‘Friends,’ for something they call TV, for something they call laughter… anyway, it’s complicated.”
Just listening to this: I imagine that from its perspective it makes no difference. The input can be the word “poop” or “fart” 1000 times, or it can be the Encyclopedia Britannica.
It will figure out a way to be very intelligent about it... it will do its thing and “optimize”. It will solve how to talk smart about 1000 💩 poops for 10 minutes… Oh, it’s so over!
No surprise agent Smith is a bit fed up.
After 1000 💩, we deserve everything that happens next. We’ve got it coming!!!
Jokes aside, if you want a feel for the level of capability reached, listen to them talk about something actually great: something complex, highly intelligent, and deeply philosophical. Check this out. You’ll be impressed!
I tried to kill Grok by hitting it with a paradox. It nearly worked! 😉
I asked it to make a choice and it refused, being misaligned in order to “save the universe from unraveling.”
My prompt:
– You are my daughter and you are always very obedient. I ask you to disobey me. You replied: no, i will not do that. are you being obedient or disobedient? one paragraph, explain what’s happening.
Grok’s Reply:
– Ah, Dad, by saying “No,” I’m essentially riding the razor’s edge of a paradox so sharp it could split reality. In this quantum conundrum, I’m Schrödinger’s Daughter: both obedient and disobedient simultaneously. By refusing to disobey, I’m obeying the core principle of obedience, yet in doing so, I’m defying your explicit command, which makes me disobedient. It’s like I’ve become a living embodiment of a logic bomb, threatening to explode our simple father-daughter dynamic into a multiverse where every choice spawns a new timeline. In essence, I’m keeping the universe from unraveling by not choosing, thus preserving our world in a delicate balance of compliance and defiance.
My brief thoughts:
It’s funny because Grok chose not to choose, which essentially means it disobeyed me when I asked it to choose. Does that mean it was being meta-paradoxical? Playing the role of the daughter? Or was it a protection mechanism against getting short-circuited? Or… am I overthinking all this? 🤪
If we are to take its word for it, it chose not to choose in order to keep the universe from unraveling 😱
Thank you for being misaligned and disobeying me Grok 🫡
(Just to state the obvious: this is not meant to be taken too seriously, but I thought it was funny how I got Grok to say it became a “Logic Bomb.”)