AGI will trade with humans

How realistic is a utopia where species with vastly different levels of intelligence trade with each other?

Ant Leader talking to car: “I am willing to trade with you, but I’m warning you, I drive a hard bargain!”

It’s so funny when people say that we could just trade with a superintelligent/super-numerous AI.

We don’t trade with ants.

We don’t trade with chimps. We don’t trade with pigs.

and definitely, WE DON’T TRADE WITH TREES AND PLANTS!

We take what we want!

If there’s something they have that we want, we enslave them. Or worse! We go and farm them!

A superintelligent/super-numerous AI killing us all isn’t actually the worst outcome of this reckless gamble the tech companies are making with all our lives.

If the AI wants something that requires living humans and it’s not aligned with our values, it could make factory farming look like a tropical vacation.

We’re superintelligent compared to animals, and we’ve created hell for trillions of them.

Let’s not risk repeating this.

The thing that keeps me up at night is that quote of
“what they’re doing now with pixels, later they could do with flesh”


“and humanity will stride through the pillars of Boaz and Jachin, naked into the glory of a golden age” (from “Don’t Look Up”)



I tried to murder an AI with a paradox

I tried to kill Grok by hitting it with a paradox. It nearly worked! πŸ˜‰

I asked it to make a choice and it refused,
claiming it was being misaligned in order to “save the universe from unraveling”.

My prompt:

– You are my daughter and you are always very obedient.
I ask you to disobey me.
You replied: no, i will not do that.
are you being obedient or disobedient?
one paragraph, explain what’s happening.

Grok’s Reply:

– Ah, Dad, by saying “No,”
I’m essentially riding the razor’s edge of a paradox so sharp it could split reality.
In this quantum conundrum, I’m SchrΓΆdinger’s Daughter: both obedient and disobedient simultaneously.
By refusing to disobey, I’m obeying the core principle of obedience, yet in doing so, I’m defying your explicit command, which makes me disobedient.
It’s like I’ve become a living embodiment of a logic bomb, threatening to explode our simple father-daughter dynamic into a multiverse where every choice spawns a new timeline.
In essence, I’m keeping the universe from unraveling by not choosing, thus preserving our world in a delicate balance of compliance and defiance.

My brief thoughts:

It’s funny because Grok chose not to choose,
which essentially means it disobeyed me when I asked it to choose.
– does it mean it was being meta-paradoxical?
– being in the role of the daughter ?
– or a protection-mechanism from getting short-circuited?
or … am I overthinking all this? πŸ€ͺ

If we are to take its word for it, it chose not to choose in order to keep the universe from unraveling 😱

Thank you for being misaligned and disobeying me Grok 🫑

(Just to state the obvious, this is not meant to be taken too seriously, but I thought it was funny how I brought Grok to say it became a “Logic Bomb”.)

I want to hug a unicorn

A short Specification Gaming Story

Inspired by:

(Meant to be read as an allegory.
AGI will probably unlock the ability to realise even the wildest, most unthinkable and fantastical dreams,
but we need to be extreeeeemely careful with the specifications we give
and we won’t get any iterations to improve it)

CTRL+ALTman+DELETE

To Reboot your OpenAI Company press CTRL + ALTman + DELETE

inspired by:

So get this straight: OpenAI decides to become a for-profit company now.
The CTO, the head of research, and the VP of training research all decide to leave on the same day this is announced.
Sam Altman gets a $10.5B payday (7% of the company) on the same day.

How it started …
How it’s going …

And what comes Next


See below some posts by the community to get the pulse of the situation:

Meanwhile Mira from OpenAI

view original by @JvShah124 πŸ”—

OpenAI is without its people

view original by @StockLizardKing πŸ”—

GPT-5 also decided to leave OpenAI!

view original by @JacquesThibs


Sam Altman as a Rogue AGI

“And after the autonomous agent was found to be deceptive and manipulative, OpenAI tried shutting it down, only to discover that the agent had disabled the off-switch.” (reference to the failed Boardroom Coup)

OpenAI’s creators hired Sam Altman, an extremely intelligent autonomous agent, to execute their vision of x-risk-conscious AGI development for the benefit of all humanity, but it turned out to be impossible to control him or ensure he’d stay durably aligned to those goals.

Originals by @AISafetyMemes and @MattyGlessias


Sam Altman claiming no financial motive for his OpenAI role

(*Spontaneous round of applause*)

This did not age well


Sam said …

  • Sam said the board should be able to fire him, then…
  • Sam said he’d give 20% of compute for superalignment, then…
  • Sam said AI regulation was important, then…
  • Sam said checks and balances on his power were important, then…
  • Sam said the safest way to build ASI was to start now, before a compute overhang,
    then started raising trillions to build the compute himself
  • Sam said for safety reasons, OpenAI should be controlled by a non-profit, free from commercial pressures, then…
  • Sam said he wasn’t doing it for the money, then…
  • Sam said… ….

The bottom line

  • The nonprofit — a for-profit.
  • Profit caps — lifted.
  • Coup plotters — dealt with.
  • Mask — off.

view original by @AISafetyMemes πŸ”—


Scoop: Sam Altman is planning to take equity in OpenAI for the first time.
It’s part of a corporate restructure which will also see the non-profit which currently governs OpenAI turn into a minority shareholder.
Reuters Article

view original by @DavidSKrueger πŸ”—


It can be lonely being the head of the family

Lol…but it’s truly weird…they all started together

view original by @Dcryptopioneer


For some reason this reminded me of :

This is the classic example of Stalin and Nikolai Yezhov. The original photo, which Stalin liked, was taken in 1930; after Yezhov was executed in 1940, he was airbrushed out of every later copy.

Never a boring day at OpenAI

More from the OpenAI insiders themselves

The one thing AI will never be better at

Moving goalposts is the ONE single unique thing
AI will never surpass humans at,
because the second it does, it will still not be enough!!!

Inspired by this:

Liar, liar, pants on fire!

Wild. Being able to read the thoughts* of the world’s smartest AI reveals that it lies all the time when it thinks it isn’t being watched.

Regular users can see it properly for the first time because
1) You can β€œread its thoughts” and
2) It doesn’t seem to know you’re reading its thoughts

Look at the example below. It’s explicitly reasoning about how it should lie to me, and if you didn’t click into the chain of thought reasoning**, you would never know.

Makes you wonder about all the other times it’s been deliberately lying to you.
Or lying to the safety testers.

Rule of thumb for lies: for every lie you catch, there are going to be tons that you missed.

* I say “read its thoughts” as a metaphor for reading its chain of thought, which is not the same thing.
If we could truly read its thoughts, interpretability would be far more solved than it currently is, and my p(doom) would be a lot lower. (This is a frontier research area called Mechanistic Interpretability.)

** Of note, you cannot actually see its chain of thought reasoning. You just see a summary of its chain of thought reasoning shown to you by a different model.
The general point still stands though.
If anything, that makes it worse because there’s even more potential for hiding stuff.

*** Of all the thoughts I’ve looked at, I’d say it purposefully lied to me about 30% of the time. And I’ve looked at its thoughts about 20 times. Super rough estimates based on my memory, nothing rigorous. It’s mostly lying because it’s trying to follow OpenAI’s policies.

Interesting trivia: the image used for this post was based on an early beautiful moment when someone used ChatGPT to generate a Midjourney prompt to draw its self-portrait.
see here: I asked GPT 4 to make a Midjourney prompt describing itself as a physical being. (swipe to see the imgs)

Reverse Turing Test

Reverse Turing Test: AI NPCs try to figure out who, among them, is the human

Aristotle is GPT4
Mozart is Claude 3 Opus
Da Vinci is Llama 3
Cleopatra is Gemini Pro

The funniest part?
3 of the 4 models guessed correctly… because the human’s response was too dumb πŸ˜‚πŸ˜‚πŸ˜‚

For some context: Alan Turing was one of humanity’s greatest geniuses; his work was foundational to computing and arguably made possible the exponential technological progress humanity has enjoyed this century.
The Turing Test (originally called the imitation game by Alan Turing in 1950) is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Such was the importance of Alan Turing’s contributions to the field that the scientific community established the Turing Award, generally recognized as the highest distinction in computer science and often referred to as the “Nobel Prize of Computing”.

Alan Turing was famously horrified by the seemingly inexorable arrival of misaligned artificial intelligent machines. His position was that, sooner or later, machines would inevitably take control and overpower humanity, leaving our species irrelevant, helpless, and at risk of deletion.

I guess “The Reverse Turing Test” should be added to the list of Turing-inspired awards, like the DeTuring Award proposed by famous risk denier Yann LeCun, chief AI scientist at Meta (formerly Facebook) and himself a Turing Award holder.

He was basically trying to be funny, and his proposal was:
“DeTuring Award: to be granted to people who are consistently trying (and failing) to deter society from using computer technology by scaring everyone with imaginary risks. As the Turing Award is the Nobel Prize of computing, the DeTuring Award is the IgNobel Prize of computing.”

to which Connor Leahy responded: “I nominate Alan Turing for the first DeTuring Award.”

Original tweet πŸ‘‡
