Yoshua Bengio

Yoshua Bengio is recognized worldwide as one of the leading experts in artificial intelligence, known for his conceptual and engineering breakthroughs in artificial neural networks and deep learning.

In 2022, Yoshua Bengio became the most cited computer scientist in the world as measured by h-index. He is the 2018 laureate of the A.M. Turing Award, often called "the Nobel Prize of Computing," which he shared with Geoffrey Hinton and Yann LeCun for their contributions and advances in deep learning. In 2022, he was appointed Knight of the Legion of Honor by France and named co-laureate of Spain's Princess of Asturias Award for Technical and Scientific Research. In 2023, Yoshua Bengio was appointed a member of the UN's Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology.

He is a Full Professor in the Department of Computer Science and Operations Research at Université de Montréal and the Founder and Scientific Director of Mila – Quebec Artificial Intelligence Institute, one of the largest academic institutes in deep learning and one of the three federally funded centers of excellence in AI research and innovation in Canada.

He began his studies in Montreal and obtained his Ph.D. in Computer Science from McGill University in 1991. After a postdoctoral fellowship at the Massachusetts Institute of Technology (MIT) on statistical learning and sequential data, he completed a second postdoc at AT&T Bell Laboratories in Holmdel, NJ, on learning and vision algorithms in 1992–1993. In September 1993, he returned to Montreal and joined UdeM as a faculty member.

In 2016, he became the Scientific Director of IVADO. He is Co-Director of the CIFAR Learning in Machines & Brains program, which funded the initial breakthroughs in deep learning. Since 2019, he has held a Canada CIFAR AI Chair and served as Co-Chair of Canada's Advisory Council on AI.

Concerned about the social impact of AI, he played an active role in drafting the Montreal Declaration for the Responsible Development of Artificial Intelligence. His goal is to help uncover the principles that give rise to intelligence through learning, while favouring the development of AI for the benefit of all.

Yoshua Bengio was made an Officer of the Order of Canada and a Fellow of the Royal Society of Canada in 2017 and in 2020, became a Fellow of the Royal Society of London. From 2000 to 2019, he held the Canada Research Chair in Statistical Learning Algorithms. He is a member of the NeurIPS Foundation advisory board and Co-Founder of the ICLR conference.

His scientific contributions have earned him numerous awards, including the 2019 Killam Prize for Natural Sciences, the 2019 Prix d'excellence FRQNT, the 2019 IEEE CIS Neural Networks Pioneer Award, the 2018 Lifetime Achievement Award from the Canadian AI Association, the 2018 Medal of the 50th Anniversary of the Ministry of International Relations and Francophonie, the 2017 Government of Québec Marie-Victorin Award and the 2009 Acfas Urgel-Archambault Prize. In 2017, he was also named Radio-Canada's Scientist of the Year.

The Ghost in the Machine 👻👻👻

Yoshua Bengio on Dissecting The Extinction Threat of AI

July 7, 2023 12:00 am

Yoshua Bengio, the legendary AI expert, will join us for Episode 128 of Eye on AI podcast. In this episode, we delve into the unnerving question: Could the rise of a superhuman AI signal the downfall of humanity as we know it?

Join us as we embark on an exploration of the existential threat posed by superhuman AI, leaving no stone unturned. We dissect the Future of Life Institute's role in overseeing large language model development, as well as the sobering warnings issued by the Center for AI Safety regarding artificial general intelligence. The stakes have never been higher, and we uncover the pressing need for action.

Prepare to confront the disconcerting notion of society’s gradual disempowerment and an ever-increasing dependency on AI. We shed light on the challenges of extricating ourselves from this intricate web, where pulling the plug on AI seems almost impossible. Brace yourself for a thought-provoking discussion on the potential psychological effects of realizing that our relentless pursuit of AI advancement may inadvertently jeopardize humanity itself.

In this episode, we dare to imagine a future where deep learning amplifies System 2 capabilities, forcing us to develop countermeasures and regulations to mitigate the associated risks.

We grapple with the possibility of leveraging AI to combat climate change, while treading carefully to prevent catastrophic outcomes.

But that’s not all. We confront the notion of AI systems acting autonomously, highlighting the critical importance of stringent regulation surrounding their access and usage.

00:00 Preview
00:42 Introduction
03:30 Yoshua Bengio's essay on AI extinction
09:45 Dangerous use cases of AI
12:00 Why are AI risks only emerging now?
15:02 Yoshua Bengio's research in AI safety
17:50 Extinction threat and fear with AI & climate change
21:10 Superintelligence and the concerns for humanity
29:50 Are corporations a form of artificial intelligence?
31:15 Extinction scenarios by Yoshua Bengio
37:00 AI agency and AI regulation
40:15 Who controls AI for the general public?
45:11 The AI debate in the world

Craig Smith Twitter: https://twitter.com/craigss

Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

...

Why a Forefather of AI Fears the Future

April 19, 2024 10:00 pm

A renowned AI pioneer explores humanity's possible futures in a world populated with ever more sophisticated mechanical minds.

This program is part of the Big Ideas series, supported by the John Templeton Foundation.

Participants:
Yoshua Bengio

Moderator:
Brian Greene

WSF Landing Page: https://www.worldsciencefestival.com/programs/why-a-forefather-of-ai-fears-the-future/

...

AI Sentience, Agency and Catastrophic Risk with Yoshua Bengio - 654

November 6, 2023 10:51 pm

Today we’re joined by Yoshua Bengio, professor at Université de Montréal. In our conversation with Yoshua, we discuss AI safety and the potentially catastrophic risks of its misuse. Yoshua highlights various risks and the dangers of AI being used to manipulate people, spread disinformation, cause harm, and further concentrate power in society. We dive deep into the risks associated with achieving human-level competence in enough areas with AI, and tackle the challenges of defining and understanding concepts like agency and sentience. Additionally, our conversation touches on solutions to AI safety, such as the need for robust safety guardrails, investments in national security protections and countermeasures, bans on systems with uncertain safety, and the development of governance-driven AI systems.





📖 CHAPTERS
===============================
00:00 - Catching up with Yoshua
04:26 - Bayesian learning and causal theories
07:13 - AI risks
13:34 - Human-level competence in AGI
15:13 - Power concentration
16:55 - Risks in LLMs
21:17 - Dangers of AGI
23:47 - Sentience and agency
25:13 - Relating agency and goal conditioning in RL
28:24 - Sentience
30:30 - Difference between sentience and consciousness
36:33 - Social impacts of AI
40:30 - Approaching risks from a research perspective
44:12 - AI safety solutions
49:26 - Letter on moratorium on AI research
50:35 - Knowledge gaps in AI
54:35 - Potential risks of unintended AI behavior
56:20 - Pointer and resources


🔗 LINKS & RESOURCES
===============================
Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation - https://papers.nips.cc/paper_files/paper/2021/file/e614f646836aaed9f89ce58e837e2310-Paper.pdf
FAQ on Catastrophic AI Risks (Superhuman AI) - https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/
Pushing Back on AI Hype with Alex Hanna - 649 - https://twimlai.com/podcast/twimlai/pushing-back-on-ai-hype/
The Montreal Declaration for a Responsible Development of Artificial Intelligence - https://recherche.umontreal.ca/english/strategic-initiatives/montreal-declaration-for-a-responsible-ai/
My testimony in front of the U.S. Senate – The urgency to act against AI threats to democracy, society and national security - July 2023 - https://yoshuabengio.org/2023/07/25/my-testimony-in-front-of-the-us-senate/
Pause Giant AI Experiments: An Open Letter (Letter on Moratorium on AI Research) - https://futureoflife.org/open-letter/pause-giant-ai-experiments/

For a COMPLETE LIST of links and references, head over to https://twimlai.com/go/654.


...

Eric Schmidt and Yoshua Bengio Debate How Much A.I. Should Scare Us

April 24, 2024 10:37 pm

Two top artificial intelligence experts—one an optimist and the other more alarmist about the technology’s future—engaged in a spirited debate at the TIME100 Summit.

Read more: https://ti.me/49TkilV

...

MEGATHREAT: The Dangers Of AI Are WEIRDER Than You Think! | Yoshua Bengio

April 13, 2023 3:00 pm


On Today's Episode:
The launch of ChatGPT broke adoption records month after month between December 2022 and February 2023. With over 1 billion monthly visits for ChatGPT, and over 100,000 users and $45 million in revenue for Jasper A.I., the race to adopt A.I. at scale has begun.

Does the global adoption of artificial intelligence have you concerned or apprehensive about what’s to come?

On one hand, it's easy to get caught up in the possibilities of co-existing with A.I. and living an enhanced, upgraded human experience. We already have tech and A.I. integrated into so many of our daily habits and routines: Apple Watches, Oura rings, social media algorithms, chatbots, and more.

Yoshua Bengio has dedicated more than 30 years of his computer science career to deep learning. He is an award-winning computer scientist known for his breakthroughs in artificial neural networks. Why, after three decades of contributing to the advancement of A.I. systems, is Yoshua now calling for a slowdown in the development of powerful A.I.?

This conversation is about being open-minded and aware of the dangers of AI we all need to consider from the perspective of one of the world’s leading experts in artificial intelligence.

Conscious computers, A.I. trolls, the evolution of machines, and what it means to be a neural network are just a few of the topics you'll find interesting in this conversation.

QUOTES:

“We need to be maybe much more careful and provide much more of guidance and guardrails in regulation, to minimize potential harm that could come out of more and more powerful systems.”

“I would say misinformation, disinformation is the greatest large-scale danger.”

“With AI becoming more powerful, I think it's time to really accelerate that process of regulating to protect the public, and society.”

This episode also contains a relevant conversation with Bret Weinstein & Heather Heying about the changes humans must make in society in order to avoid a total collapse:
https://youtu.be/vm-4HS2khTw

...
