Max Tegmark

Max Tegmark is a professor doing AI and physics research at MIT (Massachusetts Institute of Technology) as part of the Institute for Artificial Intelligence & Fundamental Interactions and the Center for Brains, Minds and Machines.
He is the author of over 300 publications as well as the New York Times bestsellers “Life 3.0: Being Human in the Age of Artificial Intelligence” and “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality”.
His most recent AI safety research focuses on mechanistic interpretability, guaranteed-safe AI, and machine-learning-based detection of news bias.
He is a Fellow of the American Physical Society and holds a gold medal from the Royal Swedish Academy of Engineering Sciences.
Max is also a serial founder of non-profits and the president of the Future of Life Institute, which focuses on the potential risk to humanity from human-level or superintelligent artificial general intelligence (AGI), and which also works to mitigate risks from biotechnology, nuclear weapons and global warming.

Known as “Mad Max” for his ahead-of-their-time ideas and passion for adventure, he has scientific interests ranging from artificial intelligence to quantum physics and the nature of reality.

Max is one of the most outspoken and influential academics raising the alarm about the cliff we are racing towards. He is a frequent presence in expert forums, advises government officials, and works constantly to educate the general public and update the global consciousness.
Time Magazine named him one of the 100 Most Influential People in AI in 2023.

An optimist by nature, he never misses an opportunity to highlight how unimaginably wonderful and beautiful the future can be, if we don’t completely delete it by recklessly rushing to reach AGI a few years earlier. Conquering sickness and death, expanding to the stars, and filling our galaxy with intelligence and meaningful experiences: it is all within our grasp, if only we can coordinate, catch our breath, stop blindly chasing some extra dollar, and break ourselves free from hubris.
His message is simple: slow down a bit on capabilities, so that we can catch up on alignment and reap the benefits for an eternity to come.

Myths and Facts About Superintelligent AI

August 29, 2017 3:33 pm

Join the AI conversation: http://AgeofAI.org
This video was based on Max’s book "Life 3.0”, which you can find at: http://amzn.to/2iEwe6w

Support MinutePhysics on Patreon! http://www.patreon.com/minutephysics
Link to Patreon Supporters: http://www.minutephysics.com/supporters/

We live in an era of self-driving cars, autonomous drones, deep learning algorithms, computers that beat humans at chess and Go, and so on. So it’s natural to ask: will artificial superintelligence replace humans, take our jobs, and destroy human civilization? Or will AI just become another tool, like regular computers? AI researcher Max Tegmark helps explain the myths and facts about superintelligence, the impending machine takeover, and more.

MinutePhysics is on twitter - @minutephysics
And facebook - http://facebook.com/minutephysics
And Google+ (does anyone use this any more?) - http://bit.ly/qzEwc6

Minute Physics provides an energetic and entertaining view of old and new problems in physics -- all in a minute!

Created by Henry Reich
...

How to Keep AI Under Control | Max Tegmark | TED

November 2, 2023 1:00 pm

The current explosion of exciting commercial and open-source AI is likely to be followed, within a few years, by creepily superintelligent AI – which top researchers and experts fear could disempower or wipe out humanity. Scientist Max Tegmark describes an optimistic vision for how we can keep AI under control and ensure it's working for us, not the other way around.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership

Follow TED!
Twitter: https://twitter.com/TEDTalks
Instagram: https://www.instagram.com/ted
Facebook: https://facebook.com/TED
LinkedIn: https://www.linkedin.com/company/ted-conferences
TikTok: https://www.tiktok.com/@tedtoks

The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: https://go.ted.com/maxtegmark23

https://youtu.be/xUNx_PxNHrY

TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution–NonCommercial–NoDerivatives (CC BY-NC-ND 4.0 International), and in accordance with our TED Talks Usage Policy: https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at https://media-requests.ted.com

#TED #TEDTalks #ai
...

Max Tegmark | On superhuman AI, future architectures, and the meaning of human existence

May 20, 2024 2:03 pm

This conversation between Max Tegmark and Joel Hellermark was recorded in April 2024 at Max Tegmark’s MIT office. An edited version premiered at the Sana AI Summit on May 15, 2024, in Stockholm, Sweden.

Max Tegmark is a professor doing AI and physics research at MIT as part of the Institute for Artificial Intelligence & Fundamental Interactions and the Center for Brains, Minds, and Machines. He is also the president of the Future of Life Institute and the author of the New York Times bestselling books Life 3.0 and Our Mathematical Universe. Max’s unorthodox ideas have earned him the nickname “Mad Max.”

Joel Hellermark is the founder and CEO of Sana. An enterprising child, Joel taught himself to code in C at age 13 and founded his first company, a video recommendation technology, at 16. In 2021, Joel topped the Forbes 30 Under 30. This year, Sana was recognized on the Forbes AI 50 as one of the startups developing the most promising business use cases of artificial intelligence.

Timestamps
From cosmos to AI (00:00:00)
Creating superhuman AI (00:05:00)
Superseding humans (00:09:32)
State of AI (00:12:15)
Self-improving models (00:16:17)
Human vs machine (00:18:49)
Gathering top minds (00:19:37)
The “bananas” box (00:24:20)
Future Architecture (00:26:50)
AIs evaluating AIs (00:29:17)
Handling AI safety (00:35:41)
AI fooling humans? (00:40:11)
The utopia (00:42:17)
The meaning of life (00:43:40)

Follow Sana
X - https://x.com/sanalabs
LinkedIn - https://www.linkedin.com/company/sana-labs
Instagram - https://www.instagram.com/sanalabs/
Try Sana AI for free - https://sana.ai
...

AI extinction threat is ‘going mainstream’ says Max Tegmark

May 30, 2023 8:59 pm

We speak to Max Tegmark, a professor at MIT and a signatory to the statement warning that AI poses an extinction-level threat to humanity.

Some of the world's most influential tech geniuses and entrepreneurs say artificial intelligence poses a genuine threat to the future of civilization.

Now that the ball has been lobbed firmly into the court of global leaders and lawmakers, the question is: will they have any idea what to do about it?

Max Tegmark and Tony Cohn, professor of automated reasoning at the University of Leeds, discuss both the risks and the potential rewards of the AI future we are moving rapidly towards.
...

Max Tegmark interview: Six months to save humanity from AI? | DW Business Special

April 14, 2023 8:47 pm

A leading expert in artificial intelligence warns that the race to develop more sophisticated models is outpacing our ability to regulate the technology. Critics say his warnings overhype the dangers of new AI models like GPT. But MIT professor Max Tegmark says private companies risk leading the world into dangerous territory without guardrails on their work. His Future of Life Institute issued an open letter, signed by tech luminaries like Elon Musk, calling on AI labs to immediately pause their most powerful AI work for six months and unite on a safe way forward. Without that, Tegmark says, the consequences could be devastating for humanity.

#ai #chatgpt #siliconvalley

Subscribe: https://www.youtube.com/user/deutschewelleenglish?sub_confirmation=1

For more news go to: http://www.dw.com/en/
Follow DW on social media:
►Facebook: https://www.facebook.com/deutschewellenews/
►Twitter: https://twitter.com/dwnews
►Instagram: https://www.instagram.com/dwnews
►Twitch: https://www.twitch.tv/dwnews_hangout
Für Videos in deutscher Sprache besuchen Sie: https://www.youtube.com/dwdeutsch
...

Why You Should Fear AI Even If It's Not Conscious - Max Tegmark #shorts

September 14, 2023 8:30 pm

Max Tegmark argues that we should fear advanced AI even if it lacks consciousness or subjective experience. He explains that AI's capabilities and goals are what matter, not its internal mental state. Tegmark warns that artificial general intelligence (AGI) could soon surpass human intelligence across all domains, posing an existential threat. ...

Max Tegmark explains how quickly time is running out to manage the risks posed by AI (7/8)

November 27, 2023 1:00 pm

Max Tegmark speaks in proposition of the motion “This House believes that Artificial Intelligence is an existential threat”. Mr Tegmark is an astrophysicist and cosmologist.

Mr Tegmark gives examples of those within the AI industry who think we are not far from being outsmarted - and details the problems this will create.

This is the seventh speech of eight.

SUBSCRIBE for more speakers ► http://is.gd/OxfordUnion
SUPPORT the Oxford Union ► https://oxford-union.org/supportus
Oxford Union on Facebook: https://www.facebook.com/theoxfordunion
Oxford Union on Twitter: @OxfordUnion
Website: http://www.oxford-union.org/

ABOUT THE OXFORD UNION SOCIETY: The Oxford Union is the world's most prestigious debating society, with an unparalleled reputation for bringing international guests and speakers to Oxford. Since 1823, the Union has been promoting debate and discussion not just at Oxford University, but across the globe.

The Oxford Union is deeply grateful for and encouraged by the messages of support in response to our determination to uphold free speech. During our 200-year history, many have tried to shut us down. As the effects of self-imposed censorship on university campuses, social media and the arts show no signs of dissipating, the importance of upholding free speech remains as critical today as it was when we were founded in 1823. Your support is critical in enabling The Oxford Union to continue its mission without interruption and without interference. You can support the Oxford Union here: https://oxford-union.org/supportus
...

How to get empowered, not overpowered, by AI | Max Tegmark

July 5, 2018 5:36 pm

Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we're restricted only by the laws of physics, not the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best -- rather than worst -- thing to ever happen to humanity.

Check out more TED Talks: http://www.ted.com

The TED Talks channel features the best talks and performances from the TED Conference, where the world's leading thinkers and doers give the talk of their lives in 18 minutes (or less). Look for talks on Technology, Entertainment and Design -- plus science, business, global issues, the arts and more.

Follow TED on Twitter: http://www.twitter.com/TEDTalks
Like TED on Facebook: https://www.facebook.com/TED

Subscribe to our channel: https://www.youtube.com/TED
...

Max Tegmark - Provably safe AI

February 8, 2024 2:32 am

Max Tegmark - "Provably safe AI"

This presentation was delivered at the New Orleans Alignment Workshop, December 2023.

The Alignment Workshop is a series of events convening top ML researchers from industry and academia to discuss and debate topics related to AI alignment. The goal is to enable researchers to better understand potential risks from advanced AI, and strategies for solving them.

If you're a machine learning researcher interested in attending future workshops, please fill out the following expression of interest form to get notified about future events: https://airtable.com/appK578d2GvKbkbDD/pagkxO35Dx2fPrTlu/form

Find more talks on this YouTube channel, and at https://www.alignment-workshop.com/
...

Can AI discover new laws of physics? | Max Tegmark and Lex Fridman

January 22, 2021 2:00 pm

Lex Fridman Podcast full episode: https://www.youtube.com/watch?v=RL4j4KPwNGM
Please support this podcast by checking out our sponsors:
- The Jordan Harbinger Show: https://www.jordanharbinger.com/lex/
- Four Sigmatic: https://foursigmatic.com/lex and use code LexPod to get up to 60% off
- BetterHelp: https://betterhelp.com/lex to get 10% off
- ExpressVPN: https://expressvpn.com/lexpod and use code LexPod to get 3 months free

GUEST BIO:
Max Tegmark is a physicist and AI researcher at MIT.

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

CONNECT:
- Subscribe to this YouTube channel
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/LexFridmanPage
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman
...

Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371

April 13, 2023 6:25 pm

Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Please support this podcast by checking out our sponsors:
- Notion: https://notion.com
- InsideTracker: https://insidetracker.com/lex to get 20% off
- Indeed: https://indeed.com/lex to get $75 credit

EPISODE LINKS:
Max's Twitter: https://twitter.com/tegmark
Max's Website: https://space.mit.edu/home/tegmark
Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/pause-giant-ai-experiments
Future of Life Institute: https://futureoflife.org
Books and resources mentioned:
1. Life 3.0 (book): https://amzn.to/3UB9rXB
2. Meditations on Moloch (essay): https://slatestarcodex.com/2014/07/30/meditations-on-moloch
3. Nuclear winter paper: https://nature.com/articles/s43016-022-00573-0

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 - Introduction
1:56 - Intelligent alien civilizations
14:20 - Life 3.0 and superintelligent AI
25:47 - Open letter to pause Giant AI Experiments
50:54 - Maintaining control
1:19:44 - Regulation
1:30:34 - Job automation
1:39:48 - Elon Musk
2:01:31 - Open source
2:08:01 - How AI may kill all humans
2:18:32 - Consciousness
2:27:54 - Nuclear winter
2:38:21 - Questions for AGI

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman
...
