Reverse Turing Test

Reverse Turing Test: AI NPCs try to figure out who, among them, is the human

Aristotle is GPT-4
Mozart is Claude 3 Opus
Da Vinci is Llama 3
Cleopatra is Gemini Pro

The funniest part?
3 of the 4 models guessed correctly… because the human’s response was too dumb πŸ˜‚πŸ˜‚πŸ˜‚

For some context: Alan Turing was one of humanity’s greatest geniuses. His work was foundational to computing and arguably made possible the exponential technological progress humanity has enjoyed over the past century.
The Turing Test (originally called the imitation game by Alan Turing in 1950) is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Such was the importance of Alan Turing’s contributions to the field that the scientific community established the Turing Award, which is generally recognized as the highest distinction in computer science and is often referred to as the “Nobel Prize of Computing”.

Alan Turing was famously horrified by the inexorable arrival of misaligned, artificially intelligent machines. His position was that sooner or later machines will inevitably take control and overpower humanity, leaving our species irrelevant, helpless, and at risk of deletion.

The DeTuring Award

I guess “The Reverse Turing Test” should be added to the list of Turing-inspired coinages, like the DeTuring Award proposed by famous Risk Denier Yann LeCun, Chief AI Scientist at Meta (formerly Facebook), who is also a Turing Award laureate.

He was basically trying to be funny, and his proposal was:
“DeTuring Award to be granted to people who are consistently trying (and failing) to deter society from using computer technology by scaring everyone with imaginary risks. As the Turing Award is the Nobel Prize of computing, the DeTuring Award is the IgNobel Prize of computing.”

To which Connor Leahy responded: “I nominate Alan Turing for the first DeTuring Award.”

Latest Posts Feed

From Sam Altman’s 2015 essay on machine intelligence:

“Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.  There are other threats that I think are more certain to happen (for example, an engineered virus with a long incubation period and a high mortality rate) but are unlikely to destroy every human in the universe in the way that SMI could.  Also, most of these other big threats are already widely feared.

It is extremely hard to put a timeframe on when this will happen (more on this later), and it certainly feels to most people working in the field that it’s still many, many years away.  But it’s also extremely hard to believe that it isn’t very likely that it will happen at some point.

SMI does not have to be the inherently evil sci-fi version to kill us all.  A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal (most goals, if you think about them long enough, could make use of resources currently being used by humans) wipes us out.  Certain goals, like self-preservation, could clearly benefit from no humans.  We wash our hands not because we actively wish ill towards the bacteria and viruses on them, but because we don’t want them to get in the way of our plans.
[…]
Evolution will continue forward, and if humans are no longer the most-fit species, we may go away.  In some sense, this is the system working as designed.  But as a human programmed to survive and reproduce, I feel we should fight it.

How can we survive the development of SMI?  It may not be possible.  One of my top 4 favorite explanations for the Fermi paradox is that biological intelligence always eventually creates machine intelligence, which wipes out biological life and then for some reason decides to make itself undetectable.

It’s very hard to know how close we are to machine intelligence surpassing human intelligence.  Progression of machine intelligence is a double exponential function; human-written programs and computing power are getting better at an exponential rate, and self-learning/self-improving software will improve itself at an exponential rate.  Development progress may look relatively slow and then all of a sudden go vertical; things could get out of control very quickly (it also may be more gradual and we may barely perceive it happening).
[…]
It’s very possible that creativity and what we think of as human intelligence are just an emergent property of a small number of algorithms operating with a lot of compute power.  (In fact, many respected neocortex researchers believe there is effectively one algorithm for all intelligence.  I distinctly remember my undergrad advisor saying the reason he was excited about machine intelligence again was that brain research made it seem possible there was only one algorithm computer scientists had to figure out.)

Because we don’t understand how human intelligence works in any meaningful way, it’s difficult to make strong statements about how close or far away from emulating it we really are.  We could be completely off track, or we could be one algorithm away.

Human brains don’t look all that different from chimp brains, and yet somehow produce wildly different capabilities.  We decry current machine intelligence as cheap tricks, but perhaps our own intelligence is just the emergent combination of a bunch of cheap tricks.

Many people seem to believe that SMI would be very dangerous if it were developed, but think that it’s either never going to happen or definitely very far off.   This is sloppy, dangerous thinking.”
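
To make the “double exponential” claim in the excerpt above a bit more concrete, here is a minimal toy sketch in Python. The growth rates and the 30-year horizon are made-up illustrative assumptions, not figures from the quoted essay: the plain exponential stands in for hardware and human-written programs improving at a fixed rate, while the double-exponential curve adds self-improvement compounding on top.

```python
# Toy illustration with assumed numbers (not a forecast):
# plain exponential growth vs. double-exponential growth.

def exponential(t, base=1.5):
    """Plain exponential: capability ~ base**t (a fixed yearly improvement rate)."""
    return base ** t

def double_exponential(t, base=1.5, self_improvement=1.2):
    """Double exponential: the exponent itself grows exponentially,
    i.e. capability ~ base**(self_improvement**t), as if the improver
    keeps improving its own rate of improvement."""
    return base ** (self_improvement ** t)

for year in range(0, 31, 5):
    print(f"year {year:2d}: exponential ~ {exponential(year):12.1f}   "
          f"double exponential ~ {double_exponential(year):.3e}")
```

With these made-up parameters, the double-exponential curve actually lags the plain exponential one for roughly the first decade and then blows past it by dozens of orders of magnitude, which is the “looks relatively slow and then all of a sudden goes vertical” pattern the excerpt describes.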

Even the best utopian scenario of a fully automated “solved world” is actually dystopian AF!!!

I want you to picture this:
You wake up tomorrow in your bed that adjusts to your perfect sleep cycle. Your coffee brewed exactly how you like it. Your news curated for your bubble, your entertainment selected for your mood. And you feel… NOTHIN’! Cuz somewhere in the night, while you were sleeping, the world learned to run without you.

Your job… AUTOMATED! Your creativity… REPLICATED! Your expertise… DOWNLOADED! Your perspective… SIMULATED! Your passion projects… GENERATED IN SECONDS!

You sit there in your perfect automated morning with your perfect, personalised everything and you realise:

NOBODY CALLED! NOBODY TEXTED! NOBODY NEEDS YOU TO SOLVE ANYTHING! NOBODY NEEDS YOU. NOBODY NEEDS YOU TO CREATE ANYTHING! NOBODY NEEDS YOU TO SHOW UP! NOBODY NEEDS. NOBODY NEEDS YOU.

And that feeling you’ve been pushing down,

that dread creeping up your spine, that voice you’ve been silencing, finally speaks…

“Will I matter anymore?”

Do I… matter anymore?

