Deep Dives

A collection of technical explanations (still presented in layman's terms), mainly by the one and only Robert Miles.

The list includes detailed explanations of the famous Concrete Problems in AI Safety paper published in 2016, plus deep explorations of some of the most important topics in the AI Alignment struggle, such as the Orthogonality Thesis, Specification Gaming, Reward Hacking, Inner Misalignment and more.

Invest some time to internalise this content and it will become obvious how far and deep the Alignment Problem cuts, and how desperately we, collectively as a species, need to pull our act together and focus our efforts on it before the AI Race (Moloch) pushes capabilities past the point of no return.

AI Safety Advocates

Watch videos of experts eloquently explaining AI Risk

Industry Leaders and Notables

Videos of famous public figures openly warning about AI Risk

Original Films

Lethal Intelligence Guide and Short Stories

Channels

Creators contributing to raising AI risk awareness

Publication

Blow your mind at the frontier of AI


Stay In The Know!

Your email will not be shared with anyone and won't be used for any purpose other than notifying you of important updates or new content.
