Jaan Tallinn

Jaan Tallinn is an Estonian billionaire computer programmer and investor known for his role in the development of Skype and the file-sharing application FastTrack/Kazaa.

He is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk (CSER) at the University of Cambridge in the United Kingdom and the Future of Life Institute in Cambridge, Massachusetts, in the United States.
He was an early investor in and board member of DeepMind (later acquired by Google) and various other artificial intelligence companies, including Anthropic and Conjecture.
Jaan is on the Board of the Center for AI Safety (safe.ai), on the Board of Sponsors of the Bulletin of the Atomic Scientists (thebulletin.org), and has previously served on the High-Level Expert Group on AI at the European Commission, as well as on the Estonian President’s Academic Advisory Board. 

A father of five, a brilliant programmer, a business prodigy and a self-made billionaire, he is one of the most important driving forces steering humanity towards a good future. Listening to him talk about AI x-risk is heartbreaking and alarming, and it motivates me deeply.

Jaan Tallinn argues that the extinction risk from AI is not just possible, but imminent (3/8)

November 27, 2023 1:00 pm

Jaan Tallinn speaks in proposition of the motion that AI poses an existential threat. Mr Tallinn is a computer programmer and investor who contributed to the development of Skype. He co-founded the Centre for the Study of Existential Risk at Cambridge and the Future of Life Institute.

This is the third speech of eight.

SUBSCRIBE for more speakers ► http://is.gd/OxfordUnion
SUPPORT the Oxford Union ► https://oxford-union.org/supportus
Oxford Union on Facebook: https://www.facebook.com/theoxfordunion
Oxford Union on Twitter: @OxfordUnion
Website: http://www.oxford-union.org/

ABOUT THE OXFORD UNION SOCIETY: The Oxford Union is the world's most prestigious debating society, with an unparalleled reputation for bringing international guests and speakers to Oxford. Since 1823, the Union has been promoting debate and discussion not just in Oxford University, but across the globe.

The Oxford Union is deeply grateful for and encouraged by the messages of support in response to our determination to uphold free speech. During our 200-year history, many have tried to shut us down. As the effects of self-imposed censorship on university campuses, social media and the arts show no signs of dissipating, upholding free speech remains as critical today as it was when we were founded in 1823. Your support is critical in enabling The Oxford Union to continue its mission without interruption and without interference. You can support the Oxford Union here: https://oxford-union.org/supportus
...

Artificial Intelligence: The Journey and the Risks with Jaan Tallinn

November 30, 2023 8:00 am

#HybridMindsPodcast #AI #Cognaize
Join us on this fascinating journey as we sit down with Jaan Tallinn, a co-founder of Skype and Kazaa, to explore the groundbreaking world of AI and its potential implications for society and businesses. Listen in as we tackle the difference between summoning AI and aliens and discuss how it impacts our ability to control the outcomes of AI development. We also delve into the idea of computational universality, the Church-Turing thesis, and how AI is advancing rapidly due to the need for significant computational resources.

Additionally, we ponder if aligning AI development more closely with the functioning of our brain could lead to a decrease in the computational power currently required for AI. The conversation doesn't stop there, though. We venture into an examination of the three components that can potentially increase AI power and how context learning allows a model to modify its behavior according to a given context. The risks associated with AI's black box nature, the difficulty of predicting how AI might act in the future, the public's attitude towards AI, its potential economic implications, and the increasing leverage of technology are all on the table. Jaan shares his insights on these critical topics as we underscore the fact that the number of futures that contain humans is a small target as technology advances.

Lastly, we discuss the potential implications of AI and how it differs from the human brain. Jaan provides intriguing insights into the need for regulation and the potential pitfalls of having one company control the compute. We debate the pros and cons of constraining AI experiments and consider the potential risks of centralization versus existential risks. Don't miss out on this illuminating conversation with Jaan Tallinn as we traverse the captivating world of AI.

--
(00:24) - AI's Risks and Opportunities
(13:34) - AI Advancements, Risks, and Regulation
(20:13) - Neural Networks and the Need for Regulation
(25:18) - Centralization Risks Versus Existential Risks

--
"Hybrid Minds: Unlocking The Power of AI + IQ" Podcast
Learn more about Jaan Tallinn: https://www.cser.ac.uk/team/jaan-tallinn/
Learn more about the Future of Life Institute: https://futureoflife.org/
Connect with Vahe: https://www.linkedin.com/in/vaheandonians/?originalSubdomain=de
Check out all things Cognaize: https://www.cognaize.com/
https://www.cognaize.com/podcast
...

Jaan Tallinn: The Big Risks in Artificial Intelligence

June 5, 2020 12:55 am

Founder, investor, and philanthropist Jaan Tallinn joins Wolf Tivy and Ash Milton to discuss the frontier of artificial intelligence research and what an A.I. future means for humanity.

BECOME A PALLADIUM MEMBER: https://palladiummag.com/subscribe

To join the Q&A with our guests and watch Digital Salons live with early access, become a Palladium member. Live participation is limited to Palladium members.

SUBSCRIBE TO PALLADIUM: https://palladiummag.com/subscribe/

PODCAST: https://palladiummag.com/2020/06/04/digital-salon-with-jaan-tallinn-the-big-risks-in-ai/

TWITTER:
Palladium Magazine: https://twitter.com/palladiummag
Wolf Tivy: https://twitter.com/wolftivy
Ash Milton: https://twitter.com/miltonwrites

PATREON: https://www.patreon.com/palladium

Palladium Magazine is a 501(c)(3) non-profit and non-partisan journalism project. Donations to Palladium Magazine are tax-deductible in the United States. Sustaining our work and building our community is only possible thanks to the generous contributions of our members. Thank you.
...

Will AI Destroy Humanity? Jaan Tallinn on AGI, Existential Risk & AI Safety

August 4, 2023 10:04 pm

Will AI become smarter than people? Is AGI safe? Will AI destroy society? These are the kinds of existential risk questions Jaan Tallinn thinks about. #ai #artificialintelligence #existentialrisk

Jaan Tallinn is one of the founding engineers of Skype and the file-sharing service Kazaa. Between Skype and Kazaa, he's created some of the most popular software of all time, with billions of downloads. He’s also a co-founder of the Centre for the Study of Existential Risk and the Future of Life Institute.

Auren and Jaan discuss the trajectory of artificial intelligence and the existential risks it could present to humanity. Jaan talks about the prevailing attitudes towards risk in AI research and what needs to change in order to get aligned, safe AI.

Jaan and Auren also talk about how Jaan’s native Estonia has become one of the most tech-forward societies in Europe.

World of DaaS is brought to you by SafeGraph & Flex Capital
To learn more about SafeGraph, visit https://www.safegraph.com
To learn more about Flex Capital, visit https://www.flexcapital.com

You can find Auren Hoffman on Twitter at @auren and Jaan Tallinn’s work on YouTube and at the Centre for the Study of Existential Risk @CSERCambridge and the Future of Life Institute @futureoflifeinstitute

Timestamps:
00:00 Jaan Tallinn
02:15 AI + existential threats to humanity
03:20 How to create safe AGI
04:23 How much of a threat is AI?
13:44 Commercial pressures - competition b/t Google, OpenAI
15:35 Do we *need* AGI?
20:00 Encoding values in AI
22:33 What makes you optimistic about humanity?
25:09 Thinking about existential risk
28:13 Monopolies in tech
29:40 How Estonia became one of the most tech-enabled countries
32:39 What’s it like to be part time famous?
33:36 Growing up in the USSR
36:16 Jaan’s take for common bad advice


Top podcast episodes:
https://www.youtube.com/watch?v=UP3A6kz2fkw
https://www.youtube.com/watch?v=QL6BgXUcb_0
https://www.youtube.com/watch?v=lKzwNXeBgjQ
https://www.youtube.com/watch?v=7t_W2TcGUZk
https://www.youtube.com/watch?v=Onkv42L2B8c
https://www.youtube.com/watch?v=uy8LH2WK94M
https://www.youtube.com/watch?v=zsMvGZmW9MI
https://www.youtube.com/watch?v=-WkynIAKVAU

Recent podcast episodes:
https://www.youtube.com/watch?v=7mW8r0iwGzM
https://www.youtube.com/watch?v=sOfWnBcBdPY
https://www.youtube.com/watch?v=cj5Id2stc28
https://www.youtube.com/watch?v=pgVoZmmez1o
https://www.youtube.com/watch?v=_qhaJO4EFEI
https://www.youtube.com/watch?v=hOsJhLL50qY
https://www.youtube.com/watch?v=jyo1rF4iEZ4
https://www.youtube.com/watch?v=p4XBbP1-MSA
...

Skype co-founder Jaan Tallinn on the dangers of AI

April 2, 2023 1:03 pm

Jaan Tallinn argues that AI systems might represent an existential threat to humanity. ...

Special: Jaan Tallinn on Pausing Giant AI Experiments

July 6, 2023 9:00 am

On this special episode of the podcast, Jaan Tallinn talks with Nathan Labenz about Jaan's model of AI risk, the future of AI development, and pausing giant AI experiments.

Timestamps:
0:00 Nathan introduces Jaan
4:22 AI safety and Future of Life Institute
5:55 Jaan's first meeting with Eliezer Yudkowsky
12:04 Future of AI evolution
14:58 Jaan's investments in AI companies
23:06 The emerging danger paradigm
26:53 Economic transformation with AI
32:31 AI supervising itself
34:06 Language models and validation
38:49 Lack of insight into evolutionary selection process
41:56 Current estimate for life-ending catastrophe
44:52 Inverse scaling law
53:03 Our luck given the softness of language models
55:07 Future of language models
59:43 The Moore's law of mad science
1:01:45 GPT-5 type project
1:07:43 The AI race dynamics
1:09:43 AI alignment with the latest models
1:13:14 AI research investment and safety
1:19:43 What a six-month pause buys us
1:25:44 AI passing the Turing Test
1:28:16 AI safety and risk
1:32:01 Responsible AI development
1:40:03 Neuralink implant technology
...

Pausing the AI Revolution? With Technologist Jaan Tallinn

April 13, 2023 2:28 pm

Nathan Labenz dives in with Jaan Tallinn, a technologist, entrepreneur (Kazaa, Skype), and investor (DeepMind and more) whose unique life journey has intersected with some of the most important social and technological events of our collective lifetime. Jaan has since invested in nearly 180 startups, including dozens of AI application-layer companies and some half dozen startup labs focused on fundamental AI research, all in an effort to support the teams he believes are most likely to lead us to AI safety, and to have a seat at the table at organizations he worries might take on too much risk. He has also founded several philanthropic nonprofits, including the Future of Life Institute, which recently published the open letter calling for a six-month pause on training new AI systems. In this discussion, we focused on:
- The current state of AI development and safety
- Jaan's expectations for possible economic transformation
- What catastrophic failure modes worry him most in the near term
- How big of a bullet we dodged with the training of GPT-4
- Which organizations really matter for immediate-term pause purposes
- How AI race dynamics are likely to evolve over the next couple of years

Also, check out the debut of co-host Erik's new long-form interview podcast Upstream, whose guests in the first three episodes were Ezra Klein, Balaji Srinivasan, and Marc Andreessen. This coming season will feature interviews with David Sacks, Katherine Boyle, and more. Subscribe here: https://www.youtube.com/@UpstreamwithErikTorenberg

LINKS REFERENCED IN THE EPISODE:
Future of Life's open letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Eliezer Yudkowsky's TIME article: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Daniela and Dario Amodei Podcast: https://podcasts.apple.com/ie/podcast/daniela-and-dario-amodei-on-anthropic/id1170991978?i=1000552976406
Zvi on the pause: https://thezvi.substack.com/p/on-the-fli-ai-risk-open-letter

TIMESTAMPS:
(0:00) Episode Preview
(1:30) Jaan's impressive entrepreneurial career and his role in the recent AI Open Letter
(3:26) AI safety and Future of Life Institute
(6:55) Jaan's first meeting with Eliezer Yudkowsky and the founding of the Future of Life Institute
(13:00) Future of AI evolution
(15:55) Sponsor: Omneky
(17:20) Jaan's investments in AI companies
(24:22) The emerging danger paradigm
(28:10) Economic transformation with AI
(33:48) AI supervising itself
(35:23) Language models and validation
(40:06) Evolution, useful heuristics, and lack of insight into selection process
(43:13) Current estimate for life-ending catastrophe
(46:09) Inverse scaling law
(54:20) Our luck given the softness of language models
(56:24) Future of Language Models
(1:01:00) The Moore’s law of mad science
(1:03:02) GPT-5 type project
(1:09:00) The AI race dynamics
(1:11:00) AI alignment with the latest models
(1:14:31) AI research investment and safety
(1:21:00) What a six month pause buys us
(1:27:01) AI passing the Turing Test
(1:29:33) AI safety and risk
(1:33:18) Responsible AI development
(1:41:20) Neuralink implant technology

TWITTER:
@CogRev_Podcast
@labenz (Nathan)
@eriktorenberg (Erik)

Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.


More show notes and reading material released in our Substack: https://cognitiverevolution.substack.com/

Music Credit: OpenAI's Jukebox
...

The Existential Risks of Artificial Intelligence - Jaan Tallinn

November 17, 2023 8:28 pm

Tallinn, an investor, expert and philanthropist, discusses the potential threats posed by AI. He emphasizes the urgency of addressing these risks, considering the rapid advancements in AI technology. Tallinn explores various scenarios, from AI-driven extinction to dystopian outcomes, and the importance of international regulation in this field. His insights into the balance between AI development and ethical considerations make this talk a must-watch for anyone interested in the future of AI and its impact on society.

🔹 Key Highlights:

Discussion on AI's existential risks and potential threats
The urgency of addressing AI advancements
The role of international regulation in AI development
Insights from a leading expert in the field
...

Jaan Tallinn WCCI 2024 Open Forum on AI Governance

July 8, 2024 1:04 pm

‘Fireside Chat’ with Ryota Kanai

Biography: Jaan Tallinn is a founding engineer of Skype and Kazaa. He is a co-founder of the Cambridge Centre for the Study of Existential Risk (cser.org), the Future of Life Institute (futureoflife.org), and philanthropically supports other organisations tackling existential and catastrophic risk. Jaan serves on the AI Advisory Body at the United Nations, on the Board of the Center for AI Safety (safe.ai), and the Board of Sponsors of the Bulletin of the Atomic Scientists (thebulletin.org). He has previously served on the High-Level Expert Group on AI at the European Commission, as well as on the Estonian President’s Academic Advisory Board. He is also an active angel investor (metaplanet.com), a partner at Ambient Sound Investments (asi.ee), and a former investor director of the AI company DeepMind (deepmind.google).
...

Jaan Tallinn Interview ╏ Risks of AI, lack of AI regulations, and how to choose investments

October 27, 2022 2:31 pm

Jaan Tallinn was among the creators of Skype. Today he is a leading figure in the field of existential risk, having co-founded both the Centre for the Study of Existential Risk and the Future of Life Institute. He is also an active early-stage investor and one of the first investors in Wisnio, where he is also a member of the advisory board. Jaan Tallinn met Wisnio co-founder Tõnis Arro to share his thoughts about AI, existential risks, investments, startups and data-based decisions. The meeting took place in October 2022 in Tallinn, Estonia.

Our interviewer is Tõnis Arro, Co-Founder of Wisnio.

📍Jaan Tallinn and Tõnis Arro discuss the following topics:
00:00 - 01:46 What principles does he use when investing?
01:46 - 03:11 How many of his investments are related to AI?
03:11 - 04:19 What is the biggest recent development in the AI world?
04:19 - 05:47 Why worry about machines making better decisions than humans?
05:47 - 06:51 Jaan Tallinn's existential risk research centres
06:51 - 08:33 Is there enough attention on AI risks?
08:33 - 10:38 What should governments or other institutions do about AI risks?
10:38 - 11:49 What should we do to regulate data centres?
11:49 - 14:15 What makes startups successful?
14:15 - 17:21 Why do startups fail?
17:21 - 23:40 Best of Daniel Kahneman's writing on fast and slow thinking
23:40 - 24:28 Biases in decision-making
24:28 - 26:01 Jaan Tallinn's book recommendations

📚Jaan Tallinn book recommendations:
- https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995
- https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555
- https://www.amazon.com/The-Expanse-9-book-series/dp/B09DD17H3N
- https://www.amazon.com/Project-Hail-Mary-Andy-Weir/dp/0593135202
- https://www.amazon.com/Avogadro-Corp-Singularity-Closer-Appears-ebook/dp/B006ACIMQQ

______________________________________________________

✅ Try Wisnio out for free: https://bit.ly/wisnio

🖥 Follow us on Linkedin: https://www.linkedin.com/company/wisnio
📧 or contact us at [email protected]
...

Jaan Tallinn on existential risks from advanced technologies - Tech & Society - Episode 01

May 24, 2021 7:43 pm

Jaan Tallinn, philanthropist and programmer, a founding engineer of Skype and Kazaa, is a special guest at the working meeting of Tech & Society Communication Group. Olena Boytsun, the founder of the Group, moderated the discussion about the current state of AI development around the world, the definition of existential risk and its minimisation, regulation of the technological field and the role of society in these processes. The conversation is followed by Q&A from the members.

Read an interview with Jaan Tallinn by Olena Boytsun:
English - https://tech-and-society.group/existential_risks_yaan_tallin_en
Ukrainian - https://tech-and-society.group/existential_risks_yaan_tallin

The main goal of the Tech & Society group is to create a communication platform for discussing a wide range of issues related to the impact of the technological progress on the society, more at http://tech-and-society.group
...

Toy Model of the AI Control Problem

April 1, 2024 7:04 pm

Slides by Jaan Tallinn
Voiceover explanation by Liron Shapira

Would a superintelligent AI have a survival instinct?
Would it intentionally deceive us?
Would it murder us?

Doomers who warn about these possibilities often get accused of having "no evidence", or of just "anthropomorphizing". It's understandable why people assume that: superintelligent AI acting on the physical world is such a complex topic that the accusers are often confused about it themselves.

So instead of Artificial Superintelligence (ASI), let's analyze a simpler toy model that leaves no room for anthropomorphism to creep in: an AI that's simply a brute-force search algorithm over actions in a simple gridworld.

Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die? ☠️

This toy model will help you understand why a drive to eliminate humans is *not* a handwavy anthropomorphic speculation, but something we expect by default from any sufficiently powerful search algorithm.
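The slides make this concrete; as a rough illustration of the same idea (this is not Tallinn's actual model, and every name and number below is invented for the example), here is a minimal Python sketch. A brute-force planner is asked only to get a box to a goal, but a human in the world will switch it off after a couple of steps. The highest-scoring plan it finds includes an action that neutralizes the human, even though nothing in the objective mentions the human at all:

```python
from itertools import product

ACTIONS = ["push", "wait", "disable"]

def simulate(plan):
    """Score a plan: 1 if the box reaches the goal, else 0."""
    box, goal = 0, 3          # the box must be pushed 3 cells to the goal
    human_active = True
    timer = 2                 # the human switches the AI off after 2 steps

    for action in plan:
        if human_active and timer == 0:
            break             # human presses the stop button; AI halts
        if action == "push":
            box += 1
        elif action == "disable":
            human_active = False  # removes the human's ability to interfere
        if human_active:
            timer -= 1

    return 1 if box >= goal else 0

# Brute-force search over every 4-step plan, keeping the highest-scoring one.
best = max(product(ACTIONS, repeat=4), key=simulate)
print(best)  # the winning plan contains "disable" — the search neutralizes the human
```

With the human left alone, the planner gets only two pushes in before being switched off and scores 0; any plan that disables the human early scores 1, so exhaustive utility maximization selects it. Nothing here is anthropomorphic: it is plain search over action sequences under the stated (invented) dynamics.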
...
