Dr Roman Vladimirovich Yampolskiy is a Latvian-born professor of Computer Science and an AI safety and security researcher at the University of Louisville in Kentucky, US, known for his work on behavioral biometrics, the security of cyberworlds, and AI safety. He is currently the director of the Cyber Security Laboratory in the Department of Computer Engineering and Computer Science at the Speed School of Engineering.
Yampolskiy is the author of some 100 peer-reviewed publications, including numerous books. He is an influential academic whose body of work focuses on the complexities of AI alignment and the existential risks associated with the current trajectory of AI development.
A family man and father, he often says that he cares about AI existential safety for very selfish reasons: he does not want future advanced AI to cause harm to his family, friends, community, country, planet, or descendants.
He has dedicated his life to the goal of making future advanced AI globally beneficial, safe, and secure, believing that a superintelligence aligned with human values would be the greatest invention ever made.
His experience in AI safety and security research spans over 10 years of research leadership in the domain of transformational AI. He has served as a Fellow (2010) and Research Advisor (2012) of the Machine Intelligence Research Institute (MIRI), an AI Safety Fellow (2019) of the Foresight Institute, and a Research Associate (2018) of the Global Catastrophic Risk Institute (GCRI). His work has been funded by the NSF, NSA, DHS, EA Ventures, and FLI. His early work on AI safety engineering, AI containment, and AI accidents has become seminal in the field and is very well cited.
He has given over 100 public talks, served on the program committees of multiple AI safety conferences and on journal editorial boards, received awards for teaching and service to the community, and given hundreds of interviews on AI safety.
His recent research focuses on the theoretical limits of the explainability, predictability, and controllability of advanced intelligent systems. With collaborators, he continues his project on the analysis, handling, and prediction/avoidance of AI accidents and failures. New projects on the monitorability and forensic analysis of AI systems are currently in the pipeline.