You can take on pretty much any plant in the world. You versus the plant, and that plant is going down. Liron…
Liron is explaining how upcoming AI will treat what we consider the most complex problems as trivial, minor actions.…
John is explaining how intelligence levels and goals are independent of each other, and how we might end up in a Universe tiled…
John is explaining how AGI morality structures may very well differ from ours, similar to how we treat creatures of lesser…
Liron is giving his take on the corrigibility problem, the crushingly hard open scientific problem of designing AI systems that don’t resist…
To expect AI alignment to happen by default is like expecting a rocket randomly fired into the sky to land exactly where…
AGI is not your next gadget, it’s your portal to an utterly different Earth!
Liron explains how we are facing a discontinuity…
Liron gives us an intuition about how hard the AI alignment problem is at its core, drawing a contrast between the AI’s…