Dr Roman Vladimirovich Yampolskiy is a Latvian-born professor of computer science and AI safety and security researcher at the University of Louisville in Kentucky, US, known for his work on behavioral biometrics, security of cyberworlds, and AI safety. He is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering.

Yampolskiy is the author of roughly 100 peer-reviewed publications, including numerous books. He is an influential academic whose body of work focuses on the complexities of AI alignment and the existential risks posed by the current trajectory of AI development.

A family man and father, he often says that he cares about AI existential safety for very selfish reasons: he doesn’t want future advanced AI to cause harm to his family, friends, community, country, planet, and descendants.
He has dedicated his life to pursuing the goal of making future advanced AI globally beneficial, safe, and secure, as a superintelligence aligned with human values would be the greatest invention ever made.

His experience in AI safety and security research spans over 10 years of research leadership in the domain of transformational AI. He has been a Fellow (2010) and Research Advisor (2012) of the Machine Intelligence Research Institute (MIRI), an AI Safety Fellow (2019) of the Foresight Institute, and a Research Associate (2018) of the Global Catastrophic Risk Institute (GCRI). His work has been funded by the NSF, NSA, DHS, EA Ventures, and FLI. His early work on AI safety engineering, AI containment, and AI accidents has become seminal in the field and is widely cited.
He has given over 100 public talks, served on the program committees of multiple AI safety conferences and on journal editorial boards, received awards for teaching and service to the community, and given hundreds of interviews on AI safety.
His recent research focuses on the theoretical limits to explainability, predictability, and controllability of advanced intelligent systems. With collaborators, he continues his project on the analysis, handling, and prediction/avoidance of AI accidents and failures. New projects on monitorability and forensic analysis of AI are currently in the pipeline.

Roman Yampolskiy: Dangers of Superintelligent AI | Lex Fridman Podcast #431

June 2, 2024 10:55 pm

Roman Yampolskiy is an AI safety researcher and author of a new book titled AI: Unexplainable, Unpredictable, Uncontrollable. Please support this ...

Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable

February 2, 2024 5:19 pm

Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence ...

Dr Roman Yampolskiy - AI Apocalypse: Are We Doomed? A Chilling Warning For Humanity

July 7, 2024 11:48 pm

Roman Yampolskiy on Objections to AI Safety

May 26, 2023 10:17 am

Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should ...

Roman Yampolskiy & Robin Hanson Discuss AI Risk

May 12, 2023 9:19 pm

Dr Roman Yampolskiy | The Case for Narrow AI

June 26, 2024 4:59 pm

We discuss everything AI safety with Dr. Roman Yampolskiy. As AI technologies advance at a ...

Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4

November 22, 2023 6:08 am

In Episode #4, John Sherman interviews Dr. Roman Yampolskiy, Director of Cyber Security Laboratory in the Department of Computer Engineering and ...

"Nationalize Big AI" Roman Yampolskiy Interview Part 2: For Humanity An AI Safety Podcast Episode #5

November 27, 2023 8:20 pm

In Episode #5, Part 2, John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville, and renowned ...

Uncontrollable Superintelligence Dr Roman Yampolskiy Warns State Legislature

September 30, 2023 6:16 pm

Expert testifies to a state legislative committee on the limits of control over advanced AI.

Dr. Roman Yampolskiy Lightning Talk on AI Control at MIT’s Mechanistic Interpretability Conference

July 21, 2023 1:30 am

AI Safety and Security. AI-generated summary: The speaker discusses the challenges of creating beneficial, controllable, and safe AI and AI super ...

Keynote Speaker on Artificial Intelligence and Future of Superintelligence

May 26, 2017 7:25 pm

Need a speaker for your event? Dr. Yampolskiy has delivered 100+ keynotes. He is the author of over 100 publications including multiple journal articles ...

Roman Yampolskiy's talk at Oxford AGI Conference - Reward Function Integrity in AI Systems

May 12, 2013 7:43 pm

Oxford Winter Intelligence - Abstract: In this paper we will address an important issue of reward function integrity in artificially intelligent ...

Roman Yampolskiy Ignite Presentation at Singularity University: Potential Dangers of Exponential Technologies

May 12, 2013 8:38 pm

My Ignite talk at Singularity University on potential dangers of exponential technologies.

The Future of AI: Too Much to Handle? With Roman Yampolskiy and 3 Dutch MPs

June 7, 2024 9:53 am

Artificial intelligence has advanced rapidly in recent years. If this rise continues, it could be a matter of time until AI approaches, or ...

The Precautionary Principle and Superintelligence | A Conversation with Author Dr. Roman Yampolskiy

October 6, 2023 3:02 am

In this episode of Benevolent AI, safety researcher Dr. Roman Yampolskiy speaks with Host Dr. Ryan Merrill about societal concerns about controlling ...

Episode #44: “AI P-Doom Debate: 50% vs 99.999%” For Humanity: An AI Risk Podcast

September 4, 2024 3:06 pm

In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI safety ...
