AI Alignment in a nutshell

AI Alignment, which we cannot define, will be solved by rules on which none of us agree, based on values that exist in conflict, for a future technology that we do not know how to build, which we could never fully understand, and which must be provably perfect to prevent unpredictable and untestable failure scenarios, in a machine whose entire purpose is to outsmart all of us and think of all the possibilities that we did not.

Categories

Latest Posts Feed

Humans do not understand exponentials

AI Safety Advocates

Watch videos of experts eloquently explaining AI Risk

Industry Leaders and Notables

Videos of famous public figures openly warning about AI Risk

Original Films

Lethal Intelligence Guide and Short Stories

Channels

Creators contributing to raising AI risk awareness

Stay In The Know!

Your email will not be shared with anyone and will only be used to notify you of important updates or new content.

Popular Authors
