Crash Lessons

AI perceives humans as plants

You can take on pretty much any plant in the world. You versus the plant, and that plant is going down.

Liron explains the nature of superintelligence, highlighting the orders-of-magnitude difference in speed.


Complexity is in the eye of the beholder

Liron explains how upcoming AI will handle what we consider the most complex problems as trivial, minor actions.


Demystifying “Orthogonality Thesis”

John explains how intelligence levels and goals are independent of each other, and how we might end up in a universe tiled with endless Philadelphia Eagles logos!


We talk a good game, as we lick our fingers with that factory-farmed food

John explains how AGI moral structures may well differ from ours, much as our own morality shifts when we deal with creatures of lesser intelligence.


AI Corrigibility explained

Liron gives his take on corrigibility: the crushingly hard open scientific problem of designing AI systems that don't resist modification, even when being modified interferes with their original goals.


Rocket Alignment Analogy

To expect AI alignment to happen by default is like expecting a rocket randomly fired into the sky to land exactly where we would like it to.

Liron gives his thoughtful analysis on this visual analogy.


AI is not “just another technology”

AGI is not your next gadget, it’s your portal to an utterly different Earth!

Liron explains how we are facing a discontinuity akin to the one our planet went through when life was born and began terraforming the previously dead surface.


AI nature is to optimise against human bias

Liron gives us an intuition for how hard the AI alignment problem is at its core, contrasting the AI's recursive self-improvement, which builds its capabilities to a far-superhuman level, with what we are asking it to do at the same time: stay biased in the ways humans are biased and preserve fragile human values.

