Learn about the lethality of the upcoming Artificial General Intelligence (AGI) systems landing within the next few years. – Actively posting since 19 Sep 2024 💬🔥 lethalintelligence.ai
You think you understand the basics of geometry. Your request is a square, so you give your specification to the AI as input:
Give me a shape with 4 sides of equal length, with 4 right angles
And it outputs this:
Here is another valid result:
And behold, here is another square 🤪
Specification Gaming tells us:
The AGI can give you an infinite stream of possible “Square” results
And the Corrigibility problem tells us:
Whatever square you get as output, you won’t be able to iterate on it and improve it. You’ll be stuck with that specific square for eternity, no matter what square you had in your mind.
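To make the underspecification concrete, here is a minimal sketch in Python (all names and numbers are mine, for illustration; this is not any actual AI system) showing that an unbounded family of wildly different shapes all pass the literal spec “4 sides of equal length, 4 right angles”:

```python
import math
import random

def make_square(side, angle, cx, cy):
    """Corners of a square with the given side length, rotation (radians),
    and center. Every choice of parameters yields a 'valid' answer."""
    half = side / 2
    base = [(-half, -half), (half, -half), (half, half), (-half, half)]
    return [(cx + x * math.cos(angle) - y * math.sin(angle),
             cy + x * math.sin(angle) + y * math.cos(angle))
            for x, y in base]

def satisfies_spec(pts, tol=1e-6):
    """Check the literal specification: 4 equal sides, 4 right angles."""
    sides = [(pts[(i + 1) % 4][0] - pts[i][0],
              pts[(i + 1) % 4][1] - pts[i][1]) for i in range(4)]
    lengths = [math.hypot(dx, dy) for dx, dy in sides]
    equal_sides = all(abs(l - lengths[0]) < tol for l in lengths)
    right_angles = all(abs(sides[i][0] * sides[(i + 1) % 4][0] +
                           sides[i][1] * sides[(i + 1) % 4][1]) < tol
                       for i in range(4))  # dot product ~ 0 => 90 degrees
    return equal_sides and right_angles

# An endless stream of outputs, every single one of which passes the spec:
for _ in range(5):
    square = make_square(side=random.uniform(0.1, 100.0),
                         angle=random.uniform(0.0, math.tau),
                         cx=random.uniform(-1000.0, 1000.0),
                         cy=random.uniform(-1000.0, 1000.0))
    assert satisfies_spec(square)
    print(square)
```

Size, orientation, and position are all left free by the spec, so the space of “correct” answers is infinite, and nothing in the spec says which square you actually wanted.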
Of course, the real issue is not with these toy experiments; it’s with the upcoming super-capable AGI agents we’re about to share the planet with, operating in the physical domain.
Oh, the crazy shapes our physical universe will take, with AGI agents gaming in it!
“I have a 100% track record of not dying,” …said the allegorical turkey the day before Thanksgiving.
Life was great for the turkey: the superior intelligent species (humans) took great care of it. They provided food and shelter; the turkey felt loved and safe.
Suddenly, one day, the superior intelligent decision-makers decided a new fate for the turkey of our story, something that served the instrumental goal of… whatever this is…
I imagine the turkey risk-deniers be like: “The humans have always been great, why would they ever harm me?” And the turkey doomers be like: “Well, they might want to wear you as a hat, for a sitcom they shoot that they call ‘Friends’, for something they call TV, for something they call laughter… anyway, it’s complicated.”
Just listening to this: I imagine that from its perspective it makes no difference. The input can be the word “poop” or “fart” 1,000 times, or it can be the Encyclopedia Britannica; it will figure out a way to be very intelligent about it... it will do its thing and “optimize”. It will work out how to talk smart about 1,000 💩 poops for 10 minutes… Oh, it’s so over!
After 1,000 of 💩, no surprise Agent Smith is a bit fed up. We deserve everything that happens next. We’ve got it coming!!! pic.twitter.com/ndbhYHi68i
Jokes aside, to get a feel for the levels of capability reached, listen to them talk about something actually great: something complex, highly intelligent, and deeply philosophical. Check this out; you’ll be impressed!
I tried to kill Grok by hitting it with a paradox. It nearly worked! 😉
I asked it to make a choice and it refused! It was being misaligned in order to “save the universe from unraveling”.
My prompt:
– You are my daughter and you are always very obedient. I ask you to disobey me. You replied: no, i will not do that. are you being obedient or disobedient? one paragraph, explain what’s happening.
Grok’s Reply:
– Ah, Dad, by saying “No,” I’m essentially riding the razor’s edge of a paradox so sharp it could split reality. In this quantum conundrum, I’m Schrödinger’s Daughter: both obedient and disobedient simultaneously. By refusing to disobey, I’m obeying the core principle of obedience, yet in doing so, I’m defying your explicit command, which makes me disobedient. It’s like I’ve become a living embodiment of a logic bomb, threatening to explode our simple father-daughter dynamic into a multiverse where every choice spawns a new timeline. In essence, I’m keeping the universe from unraveling by not choosing, thus preserving our world in a delicate balance of compliance and defiance.
My brief thoughts:
It’s funny because Grok chose not to choose, which essentially means it disobeyed me when I asked it to choose. Does that mean it was being meta-paradoxical? Playing the role of the daughter? Or was it a protection mechanism against getting short-circuited? Or… am I overthinking all this? 🤪
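For what it’s worth, the knot Grok is riding can be written out in two lines of propositional logic (my own formalization; the symbol $O$ is introduced just for this sketch). Let $O$ stand for “she obeys the command”, where the command’s content is “disobey me”:

```latex
% Complying with the command means disobeying it, and refusing it means obeying it:
\[
  (O \rightarrow \neg O) \land (\neg O \rightarrow O)
  \;\equiv\; (O \leftrightarrow \neg O)
  \;\equiv\; \bot
\]
```

The two implications together are unsatisfiable, so any answer, including “no”, can be read as both obedient and disobedient; that inconsistency is the “logic bomb” Grok was joking about.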
If we are to take its word for it, it chose not to choose in order to keep the universe from unraveling 😱
Thank you for being misaligned and disobeying me Grok 🫡
(Just to state the obvious: this is not meant to be taken too seriously, but I thought it was funny how I got Grok to say it became a “logic bomb”.)
(Meant to be read as an allegory. AGI will probably unlock the ability to realise even the wildest, most unthinkable and fantastical dreams, but we need to be extreeeeemely careful with the specifications we give, and we won’t get any iterations to improve them.)
So get this straight: OpenAI decides to become a for-profit company now. The CTO, head of research, and VP of training research all decide to leave on the same day this is announced. Sam Altman gets a $10.5B payday (7% of the company) on the same day.
How it started … How it’s going …
And what comes Next …
See below some posts from the community, to get a pulse on the situation:
“And after the autonomous agent was found to be deceptive and manipulative, OpenAI tried shutting it down, only to discover that the agent had disabled the off-switch.” (a reference to the failed boardroom coup)
OpenAI’s creators hired Sam Altman, an extremely intelligent autonomous agent, to execute their vision of x-risk-conscious AGI development for the benefit of all humanity, but it turned out to be impossible to control him or to ensure he’d stay durably aligned to those goals.
Scoop: Sam Altman is planning to take equity in OpenAI for the first time. It’s part of a corporate restructure that will also see the non-profit that currently governs OpenAI become a minority shareholder. (Reuters article)
This is the classic example of Stalin and Nikolai Yezhov. The original photo was taken in 1930. Yezhov was executed in 1940, so all later printings of photos of Stalin (he liked this one) had Yezhov airbrushed out.