Connor Leahy

Connor Leahy is a German-American entrepreneur and artificial intelligence researcher. He is best known as a co-founder and co-lead of EleutherAI, a grassroots non-profit organization focused on advancing open-source artificial intelligence research.

A talented coder, he famously reverse-engineered OpenAI’s large language model (LLM) GPT-2 in 2019.

In 2022 he co-founded Conjecture, which he currently leads as the CEO.
Conjecture’s mission is to make a major breakthrough in the AI Alignment problem by creating a new paradigm for infrastructure that allows us to build scalable, auditable, controllable AI systems (an approach they refer to as Cognitive Emulation).

Connor is a passionate and gifted communicator: the mainstream media love him, and the academic and intellectual community respects him and places him at the forefront of the discourse. Being an expert in the field as well as a very eloquent speaker puts him in a unique position to influence public opinion and sway politicians and decision makers.

His middle name happens to be “Johannes”, the German equivalent of “John”.
Those who have watched the Terminator movies will know that John Connor was the leader of the resistance against the machines (after Skynet took over)…

Source: Kat Woods

Why this top AI guru thinks we might be in extinction level trouble | The InnerView

January 22, 2024 5:35 pm

Lauded for his groundbreaking work in reverse-engineering OpenAI's large language model, GPT-2, AI expert Connor Leahy tells Imran Garda why he is now sounding the alarm.

Leahy is a hacker, researcher and expert on Artificial General Intelligence. He is the CEO of Conjecture, a company he co-founded to focus on making AGI safe. He shares the view of many leading thinkers in the industry, including the “godfather of AI”, Geoffrey Hinton, who fear what they have built. They argue that recent rapid, unregulated research, combined with the exponential growth of AGI, will soon lead to catastrophe for humanity - unless an urgent intervention is made.

00:00 AI guru
02:11 What is AGI, and what are the risks it could bring?
03:51 "Nobody knows why AI does what it does"
05:41 From an AI enthusiast to advocating for a more cautious approach to AI development
07:58 What does Connor expect to happen when we lose control to AGI?
11:40 "People like Sam Altman are rockstars"
14:38 Connor's vision of a safe AI future
15:24 Imran: "One year left?"
17:26 "Normal people do have the right intuition about AI"
20:58 ChatGPT, limitations of AI, and a story about a frog
24:53 Control AI

Connor recommends we all visit Control AI:
https://controlai.com/deepfakes
Control AI / Twitter: @ai_ctrl
Control AI / TikTok: @ctrl.ai
Control AI / Instagram: @ai_ctrl

Subscribe:
http://trt.world/subscribe
Livestream: http://trt.world/ytlive
Facebook: http://trt.world/facebook
Twitter: http://trt.world/twitter
Instagram: http://trt.world/instagram
Visit our website: http://trt.world
...

Connor Leahy | This House Believes Artificial Intelligence Is An Existential Threat | CUS

October 22, 2023 7:00 pm

Connor Leahy speaks as First proposition for the motion on Thursday 12th October 2023 at 8:00pm in the Debating Chamber.

The rapid growth in the capabilities of AI has struck fear into the hearts of many, while others herald it as mankind's greatest innovation. From autonomous weapons to cancer-curing algorithms to a malicious superintelligence, we aim to discover whether AI will be the end of us or the beginning of a new era.
............................................................................................................................

Connor Leahy
Connor Leahy is the founder and CEO of AI Alignment research startup Conjecture. He is also the Co-founder and Co-lead of EleutherAI, a grassroots non-profit organisation focused on advancing open-source artificial intelligence research. He reverse-engineered OpenAI's large language model GPT-2 when he was 24 years old.

Thumbnail Photographer: Nordin Catic

............................................................................................................................

Connect with us on:

Facebook: https://www.facebook.com/TheCambridgeUnion

Instagram: https://www.instagram.com/cambridgeunion

Twitter: https://twitter.com/cambridgeunion

LinkedIn: https://www.linkedin.com/company/cambridge-union-society
...

AI researcher Connor Leahy fears extinction of human race by AI @Christiane Amanpour

May 3, 2023 10:53 am

https://twitter.com/amanpour/status/1653452034463367168?s=20

https://twitter.com/NPCollapse?s=20

https://www.conjecture.dev/
...

Conjecture CEO Connor Leahy on CNN to talk AI Regulation

September 15, 2023 1:54 pm

Conjecture CEO Connor Leahy on CNN, talking about the UK's AI Safety Summit and AI regulation, in particular the potential for training-run moratoriums.


Visit our website at: https://conjecture.dev/
Follow us on https://twitter.com/ConjectureAI

We're also working on a friendly AI assistant; try it for free at: https://assistant.conjecture.dev/
...

“It Could Potentially Extinct The Entire Human Species” | How BIG Are AI Risks?

November 21, 2023 9:30 am

Hundreds of employees at OpenAI have threatened to quit unless the board of the ChatGPT creator resigns and Sam Altman is reappointed as chief executive, reversing a dramatic boardroom coup.

More than 500 of OpenAI’s 770 staff have signed an open letter which says the process through which Altman and his co-founder Greg Brockman were removed on Friday has “undermined our mission and company.”

It adds: “Your conduct has made it clear you did not have the competence to oversee OpenAI.”

The letter was published online hours after Microsoft’s boss Satya Nadella tweeted that he was bringing Altman in-house to build and lead a team researching advanced artificial intelligence.

Microsoft is the biggest shareholder in OpenAI with a 49 per cent stake worth $10 billion.

TalkTV’s Rosanna Lockwood is joined by The Times' technology business editor Katie Prescott and chief executive of Conjecture Connor Leahy to discuss the story and the risks of artificial intelligence.

Connor says: “It could potentially extinct the entire human species.”

#ai #artificialintelligence #talktv #talkradio
...

AI is a Ticking Time Bomb with Connor Leahy

June 26, 2023 12:30 pm

AI is here to stay, but at what cost? Connor Leahy is the CEO of Conjecture, a mission-driven organization that’s trying to make the future of AI go as well as it possibly can. He is also a Co-Founder of EleutherAI, an open-source AI research non-profit lab.

In today’s episode, Connor and David cover:
1) The intuitive arguments behind the AI Safety debate
2) The two defining categories of ways AI could end all of humanity
3) The major players in the race towards AGI, and why they all seem to be ideologically motivated, rather than financially motivated
4) Why the progress of AI power is based on TWO exponential curves
5) Why Connor thinks government regulation is the easiest and most effective way of buying us time

------
🚀 Unlock $3,000+ in Perks with Bankless Citizenship 🚀
https://bankless.cc/GetThePerks

------
📣 CYFRIN | Smart Contract Audits & Solidity Course
https://bankless.cc/cyfrin

------
BANKLESS SPONSOR TOOLS:

🐙KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE
https://k.xyz/bankless-pod-q2

🦊METAMASK LEARN | HELPFUL WEB3 RESOURCE
https://bankless.cc/MetaMask

⚖️ ARBITRUM | SCALING ETHEREUM
https://bankless.cc/Arbitrum

🧠 AMBIRE | SMART CONTRACT WALLET
https://bankless.cc/Ambire

🦄UNISWAP | ON-CHAIN MARKETPLACE
https://bankless.cc/uniswap

🛞MANTLE | MODULAR LAYER 2 NETWORK
https://bankless.cc/Mantle

-----------
TIMESTAMPS

0:00 Intro
3:12 AI Alignment Importance
9:40 Finding Neutrality
14:16 AI Doom Scenarios
21:06 How AI Misalignment Evolves
25:56 The State of AI Alignment
32:07 The AI Race Trap
41:49 Motivations of the AI Race
56:18 AI Regulation Efforts
1:14:28 How AI Regulation & Crypto Compare
1:21:44 AI Teachings of Human Coordination
1:36:53 Closing & Disclaimers

-----------
RESOURCES

Connor Leahy
https://twitter.com/NPCollapse

Conjecture Research
https://www.conjecture.dev/research/

EleutherAI Discord
https://discord.com/invite/zBGx3azzUn

Stop AGI
https://www.stop.ai/

-----------
Related Episodes:

We’re All Gonna Die with Eliezer Yudkowsky
https://www.youtube.com/watch?v=gA1sNLL6yg4

How We Prevent the AI’s from Killing us with Paul Christiano
https://www.youtube.com/watch?v=GyFkWb903aU

-----------
Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
https://www.bankless.com/disclosures
...

Stop the World: TSD Summit Sessions: Artificial intelligence and catastrophic risk with Connor Leahy

September 11, 2024 2:29 am

In the first video edition of The Sydney Dialogue Summit Sessions, David Wroe sits down with Connor Leahy, co-founder and CEO of Conjecture AI. David and Connor speak about the catastrophic risks that a powerful but uncontrolled and unaligned artificial superintelligence could pose to humanity, and Conjecture’s approach to safe AI called “cognitive emulation”. They also discuss what it means for an intelligent agent to have goals, and the likelihood that the current dominant AI approach of large language models can continue to be scaled up with more computing power.
Connor was a panellist at ASPI’s Sydney Dialogue cyber and tech conference held on September 2 and 3. This is the first of a series of podcasts filmed on the sidelines of the conference, which will be released in the coming weeks.

Check out ASPI’s YouTube channel here to watch the full video:
https://www.youtube.com/@ASPICanberra
...

Connor Leahy on The Risks of Centralizing AI Power

November 29, 2023 4:00 pm

This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

Download NetSuite’s popular KPI Checklist, designed to give you consistently excellent performance - absolutely free at https://netsuite.com/EYEONAI

On episode 158 of Eye on AI, host Craig Smith dives deep into the world of AI safety, governance, and open-source dilemmas with Connor Leahy, CEO of Conjecture, an AI company specializing in AI safety.

Connor, known for his pioneering work in open-source large language models, shares his views on the monopolization of AI technology and the risks of keeping such powerful technology in the hands of a few.

The episode starts with a discussion on the dangers of centralizing AI power, reflecting on OpenAI's situation and the broader implications for AI governance. Connor draws parallels with historical examples, emphasizing the need for widespread governance and responsible AI development. He highlights the importance of creating AI architectures that are understandable and controllable, discussing the challenges in ensuring AI safety in a rapidly evolving field.

We also explore the complexities of AI ethics, touching upon the necessity of policy and regulation in shaping AI's future. We discuss the potential of AI systems, the importance of public understanding and involvement in AI governance, and the role of governments in regulating AI development.

The episode concludes with a thought-provoking reflection on the future of AI and its impact on society, economy, and politics. Connor urges the need for careful consideration and action in the face of AI's unprecedented capabilities, advocating for a more cautious approach to AI development.

Remember to leave a 5-star rating on Spotify and a review on Apple Podcasts if you enjoyed this podcast.


Stay Updated:

Craig Smith Twitter: https://twitter.com/craigss

Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

(00:00) Preview
(00:25) Netsuite by Oracle
(02:42) Introducing Connor Leahy
(06:35) The Mayak Facility: A Historical Parallel
(13:39) Open Source AI: Safety and Risks
(19:31) Flaws of Self-Regulation in AI
(24:30) Connor’s Policy Proposals for AI
(31:02) Implementing a Kill Switch in AI Systems
(33:39) The Role of Public Opinion and Policy in AI
(41:00) AI Agents and the Risk of Disinformation
(49:26) Survivorship Bias and AI Risks
(52:43) A Hopeful Outlook on AI and Society
(57:08) Closing Remarks and A word From Our Sponsors
...

e/acc Leader Beff Jezos vs Doomer Connor Leahy

February 2, 2024 11:30 pm

The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, the founder of the e/acc movement, to debate technology, AI policy, and human values.

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week!
https://patreon.com/mlst (public discord)
https://discord.gg/aNPkGUQtc5
https://twitter.com/MLStreetTalk

As the two discuss technology, AI safety, civilization advancement, and the future of institutions, they clash on their opposing perspectives on how we steer humanity towards a more optimal path.

Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions.

Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed to mediate order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism.

MORE CONTENT!
Post-interview with Beff and Connor: https://www.patreon.com/posts/97905213
Pre-interview with Connor and his colleague Dan: https://www.patreon.com/posts/connor-leahy-and-97631416

This debate was mapped with the society library:
https://www.societylibrary.org/connor-beff-debates

Beff Jezos (Guillaume Verdon):
https://twitter.com/BasedBeffJezos
https://twitter.com/GillVerd

Connor Leahy:
https://twitter.com/npcollapse

TOC:
00:00:00 - Intro
00:08:14 - Society library reference
00:08:44 - Debate starts
00:10:17 - Should any tech be banned?
00:25:48 - Leaded Gasoline
00:34:06 - False vacuum collapse method?
00:40:05 - What if there are dangerous aliens?
00:42:05 - Risk tolerances
00:44:35 - Optimizing for growth vs value
00:57:47 - Is vs ought
01:07:38 - AI discussion
01:12:47 - War / global competition
01:16:11 - Open source F16 designs
01:25:46 - Offense vs defense
01:33:58 - Morality / value
01:48:43 - What would Connor do
01:55:45 - Institutions/regulation
02:31:50 - Competition vs. Regulation Dilemma
02:37:59 - Existential Risks and Future Planning
02:46:55 - Conclusion and Reflection

Pod version: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/Showdown-Between-eacc-Leader-And-Doomer---Connor-Leahy--Beff-Jezos-e2far3q
...

Connor Leahy Unveils the Darker Side of AI

May 10, 2023 3:14 pm

Welcome to Eye on AI, the podcast that explores the latest developments, challenges, and opportunities in the world of artificial intelligence. In this episode, we sit down with Connor Leahy, an AI researcher and co-founder of EleutherAI, to discuss the darker side of AI.

Connor shares his insights on the current negative trajectory of AI, the challenges of keeping superintelligence in a sandbox, and the potential negative implications of large language models such as GPT4. He also discusses the problem of releasing AI to the public and the need for regulatory intervention to ensure alignment with human values.

Throughout the podcast, Connor highlights the work of Conjecture, a project focused on advancing alignment in AI, and shares his perspectives on the stages of research and development of this critical issue.

If you’re interested in understanding the ethical and social implications of AI and the efforts to ensure alignment with human values, this podcast is for you. So join us as we delve into the darker side of AI with Connor Leahy on Eye on AI.

00:00 Preview
00:48 Connor Leahy’s background with EleutherAI & Conjecture  
03:05 Large language models applications with EleutherAI
06:51 The current negative trajectory of AI 
08:46 How difficult is keeping superintelligence in a sandbox?
12:35 How AutoGPT uses ChatGPT to run autonomously 
15:15 How GPT4 can be used out of context & negatively 
19:30 How OpenAI gives access to nefarious activities 
26:39 The problem with the race for AGI 
28:51 The goal of Conjecture and advancing alignment 
31:04 The problem with releasing AI to the public 
33:35 FTC complaint & government intervention in AI 
38:13 Technical implementation to fix the alignment issue 
44:34 How CoEm is fixing the alignment issue  
53:30 Stages of research and development of Conjecture

Craig Smith Twitter: https://twitter.com/craigss

Eye on A.I. Twitter: https://twitter.com/EyeOn_AI
...

Debating the existential risk of AI, with Connor Leahy

April 17, 2024 11:48 am

Will AI kill us all?

I doubt it. In fact, I profoundly doubt it and largely believe the AI doom narrative is quite unhelpful.

However, I’m also really interested in checking my assumptions, challenging my thinking — and helping you make up your own mind. To that end, we come to my latest conversation with a guest who thinks we are (almost) doomed. 

Connor Leahy is the co-founder and CEO of Conjecture, an AI startup working on controlling AI systems and aligning them to human values. He’s also one of the most prominent voices warning of AI existential threats.

In this conversation, Connor and I discuss:

00:00 Introduction
01:05 The pause AI letter
07:00 Co-evolution of safeguards
10:05 The speed of change
22:01 Turning the safety agenda into action
30:30 Compute as a means for control
36:05 Practical approaches to AI safety
50:38 The promise of AI
57:58 Building safe and aligned AI
01:05:46 Hopes for the year to come

Where to find Connor:

Linkedin: https://www.linkedin.com/in/connor-j-leahy/

X: https://twitter.com/npcollapse

Conjecture: https://www.conjecture.dev/about

Where to find Azeem:

Website: https://www.azeemazhar.com and https://www.exponentialview.co

LinkedIn:https://www.linkedin.com/in/azhar/

YouTube: https://www.youtube.com/@AzeemExponentially

X: https://twitter.com/azeem
...

AI and Existential Risk | Robert Wright & Connor Leahy

July 26, 2023 12:08 am

Subscribe to The Nonzero Newsletter at https://nonzero.substack.com
Exclusive Overtime discussion at: https://nonzero.substack.com/p/early-access-ai-and-existential-risk

1:53 Where AI fits in Connor’s tech threat taxonomy
14:42 What does the “general” in artificial general intelligence actually mean?
21:58 What should worry us about AI right now?
29:33 Connor: Don’t put your trust in AI companies
39:00 The promise and perils of open-sourcing AI
49:24 Why "interpretability" matters
56:03 What would an aligned AI actually look like?
1:00:32 Bridging the technology wisdom gap

Robert Wright (Bloggingheads.tv, The Evolution of God, Nonzero, Why Buddhism Is True) and Connor Leahy (Conjecture). Recorded July 11, 2023.

Comments on BhTV: http://bloggingheads.tv/videos/66476
Twitter: https://twitter.com/NonzeroPods
...

Connor Leahy on a Promising Breakthrough in AI Alignment

March 3, 2024 12:36 am

Connor Leahy is one of the world's leading figures in the field of artificial intelligence (AI), known for his contributions to the understanding and development of large language models (LLMs) and his advocacy for ethical AI practices. As a co-founder of Conjecture, an organization focused on AI safety and research, Leahy emphasizes the importance of creating AI systems that are aligned with human values and can be trusted to act in humanity's best interests. His work often explores the challenges of AI alignment, the risks associated with advanced AI technologies, and the need for collaborative efforts in the AI community to ensure the safe and beneficial development of AI. Leahy is recognized for his insights into the complexities of AI ethics, the potential existential risks posed by uncontrolled AI, and the importance of rigorous research and dialogue in mitigating these risks.

Time stamp:

00:00 -- Introductory sequence
00:28 -- Introduction to the talk by Razo
02:51 -- Start of Interview
04:20 -- Leahy gives an overview of his work in AI alignment
10:05 -- Leahy talks about people who don't feel an AI threat
15:20 -- Leahy elaborates on intelligence and goals
19:55 -- Leahy on his approach to quantum physics
21:50 -- Razo on what should be the favored interpretation in science
26:04 -- Leahy on how to approach differences of opinion
38:12 -- Is Leahy optimistic about humanity's prospects?
43:50 -- Three potential scenarios for the future of humanity
48:10 -- Leahy on voting, mechanism design, and quadratic voting.
55:40 -- Leahy on what to do about human disagreement
1:01:26 -- How does Leahy reconcile his pessimism and his optimism?
1:04:35 -- Leahy on large language models and agent-based simulation
1:07:55 -- The two-state vector approach to science
1:09:15 -- Leahy on the importance of talking to politicians
1:13:55 -- Changing the assumption of 1-person 1-vote democracy
1:15:17 -- Leahy talk about the promising breakthroughs of Conjecture
1:21:55 -- Leahy on controlling AI but not fixing humanity
1:24:35 -- Where and how to follow Leahy
...

What is the Existential Risk from AI? - Pakhuis de Zwijger Special, with Conjecture's Connor Leahy

July 20, 2023 5:01 pm

https://www.existentialriskobservatory.org/events/event-ai-x-risk-and-what-to-do-about-it-10-july-18-00-pakhuis-de-zwijger-amsterdam/

What is existential risk from AI, and what do we do about it?

The development of AI has been incredibly fast over the last decade. We seem less and less able to keep up, while the abilities of AI will soon outpace us. What do we need to do to make sure that AI does not become an existential risk? What is the latest on this, and what are decision-makers willing to do? And what is their responsibility to do something about it?
...

The Existential Risk of AI Alignment | Connor Leahy, ep 91

March 6, 2023 10:00 am

Our guest is AI researcher and founder of Conjecture, Connor Leahy, who is dedicated to studying AI alignment. Alignment research focuses on gaining an increased understanding of how to build advanced AI systems that pursue the goals they were designed for instead of engaging in undesired behavior. Sometimes, this means just ensuring they share the values and ethics we have as humans so that our machines don’t cause serious harm to humanity.


In this episode, Connor provides candid insights into the current state of the field, including the very concerning lack of funding and human resources that are currently going into alignment research. Amongst many other things, we discuss how the research is conducted, the lessons we can learn from animals, and the kind of policies and processes humans need to put into place if we are to prevent what Connor currently sees as a highly plausible existential threat.


Find out more about Conjecture at conjecture.dev or follow Connor and his work at twitter.com/NPCollapse


**


Apply for registration to our exclusive South By Southwest event on March 14th @ www.su.org/basecamp-sxsw


Apply for an Executive Program Scholarship at su.org/executive-program/ep-scholarship


Learn more about Singularity: su.org


Host: Steven Parton - LinkedIn / Twitter


Music by: Amine el Filali



Subscribe: http://bit.ly/1Wq6gwm

Connect with Singularity University:
Website: http://su.org
Podcast: https://www.su.org/podcasts
Blog: https://su.org/blog/
News: http://singularityhub.com
Facebook: https://www.facebook.com/singularityu
Twitter: https://twitter.com/singularityu
Linkedin: https://www.linkedin.com/company/singularity-university

About Singularity University:
Singularity Group is an innovation company that believes technology and entrepreneurship can solve the world’s greatest challenges.

We transform the way people and organizations think about exponential technology and the future, and enable them to create and accelerate initiatives that will deliver business value and positively impact people and the planet.

Singularity University
http://www.youtube.com/user/SingularityU
...

Connor Leahy on the State of AI and Alignment Research

April 20, 2023 6:10 pm

Connor Leahy joins the podcast to discuss the state of AI. Which labs are in front? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev

Timestamps:
00:00 Landscape of AI research labs
10:13 Is AGI a useful term?
13:31 AI predictions
17:56 Reinforcement learning from human feedback
29:53 Mechanistic interpretability
33:37 Yudkowsky and Christiano
41:39 Cognitive Emulations
43:11 Public reactions to AI

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
...

Connor Leahy - e/acc, AGI and the future.

April 21, 2024 4:36 pm

Connor is the CEO of Conjecture and one of the most famous names in the AI alignment movement. This is the "behind the scenes footage" and bonus Patreon interviews from the day of the Beff Jezos debate, including an interview with Daniel Clothiaux. It's a great insight into Connor's philosophy.

Support MLST:
Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, very early access + exclusive content and lots more.
https://patreon.com/mlst
Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA
If you would like to sponsor us, so we can tell your story - reach out on mlstreettalk at gmail

Topics:
Externalized cognition and the role of society and culture in human intelligence
The potential for AI systems to develop agency and autonomy
The future of AGI as a complex mixture of various components
The concept of agency and its relationship to power
The importance of coherence in AI systems
The balance between coherence and variance in exploring potential upsides
The role of dynamic, competent, and incorruptible institutions in handling risks and developing technology
Concerns about AI widening the gap between the haves and have-nots
The concept of equal access to opportunity and maintaining dynamism in the system
Leahy's perspective on life as a process that "rides entropy"
The importance of distinguishing between epistemological, decision-theoretic, and aesthetic aspects of morality (inc ref to Hume's Guillotine)
The concept of continuous agency and the idea that the first AGI will be a messy admixture of various components
The potential for AI systems to become more physically embedded in the future
The challenges of aligning AI systems and the societal impacts of AI technologies like ChatGPT and Bing
The importance of humility in the face of complexity when considering the future of AI and its societal implications

TOC:
00:00:00 Intro
00:00:56 Connor's Philosophy
00:03:53 Office Skit
00:05:08 Connor on e/acc and Beff
00:07:28 Intro to Daniel's Philosophy
00:08:35 Connor on Entropy, Life, and Morality
00:19:10 Connor on London
00:20:21 Connor Office Interview
00:20:46 Friston Patreon Preview
00:21:48 Why Are We So Dumb?
00:23:52 The Voice of the People, the Voice of God / Populism
00:26:35 Mimetics
00:30:03 Governance
00:33:19 Agency
00:40:25 Daniel Interview - Externalised Cognition, Bing GPT, AGI
00:56:29 Beff + Connor Bonus Patreons Interview

Disclaimer: this video is not an endorsement of e/acc or AGI agential existential risk from us - the hosts of MLST consider both of these views to be quite extreme. We seek diverse views on the channel.
...

Hackers expose deep cybersecurity vulnerabilities in AI | BBC News

June 27, 2024 10:57 pm

As is the case with most other software, artificial intelligence (AI) is vulnerable to hacking.

A hacker, who is part of an international effort to draw attention to the shortcomings of the biggest tech companies, is stress-testing, or “jailbreaking,” the language models from Microsoft, OpenAI (maker of ChatGPT) and Google, according to a recent report from the Financial Times.

Two weeks ago, Russian hackers used AI for a cyber-attack on major London hospitals, according to the former chief executive of the National Cyber Security Centre. Hospitals declared a critical incident after the ransomware attack, which affected blood transfusions and test results.

On this week’s AI Decoded, the BBC’s Christian Fraser explores the security implications of businesses that are turning to AI to improve their systems.

Subscribe here: http://bit.ly/1rbfUog

For more news, analysis and features visit: www.bbc.com/news

#Technology #AI #BBCNews
...

Connor Leahy on AI Progress, Chimps, Memes, and Markets

January 19, 2023 3:52 pm

Connor Leahy from Conjecture joins the podcast to discuss AI progress, chimps, memes, and markets. Learn more about Connor's work at https://conjecture.dev.

Timestamps:
00:00 Introduction
01:00 Defining artificial general intelligence
04:52 What makes humans more powerful than chimps?
17:23 Would AIs have to be social to be intelligent?
20:29 Importing humanity's memes into AIs
23:07 How do we measure progress in AI?
42:39 Gut feelings about AI progress
47:29 Connor's predictions about AGI
52:44 Is predicting AGI soon betting against the market?
57:43 How accurate are prediction markets about AGI?

Social Media Links:
➡️ WEBSITE: https://futureoflife.org
➡️ TWITTER: https://twitter.com/FLIxrisk
➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
➡️ META: https://www.facebook.com/futureoflifeinstitute
➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
...

The AI Alignment Debate: Can We Develop Truly Beneficial AI? (HQ version)

August 4, 2023 1:48 am

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

George Hotz and Connor Leahy discuss the crucial challenge of developing beneficial AI that is aligned with human values. Hotz believes truly aligned AI is impossible, while Leahy argues it's a solvable technical challenge.

Hotz contends that AI will inevitably pursue power, but distributing AI widely would prevent any single AI from dominating. He advocates open-sourcing AI developments to democratize access. Leahy counters that alignment is necessary to ensure AIs respect human values. Without solving alignment, general AI could ignore or harm humans.

They discuss whether AI's tendency to seek power stems from optimization pressure or human-instilled goals. Leahy argues goal-seeking behavior naturally emerges while Hotz believes it reflects human values. Though agreeing on AI's potential dangers, they differ on solutions. Hotz favors accelerating AI progress and distributing capabilities while Leahy wants safeguards put in place.

While acknowledging risks like AI-enabled weapons, they debate whether broad access or restrictions better manage threats. Leahy suggests limiting dangerous knowledge, but Hotz insists openness checks government overreach. They concur that coordination and balance of power are key to navigating the AI revolution. Both eagerly anticipate seeing whose ideas prevail as AI progresses.

Transcript and notes: https://docs.google.com/document/d/1smkmBY7YqcrhejdbqJOoZHq-59LZVwu-DNdM57IgFcU/edit?usp=sharing
Pod: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/episodes/Can-We-Develop-Truly-Beneficial-AI--George-Hotz-and-Connor-Leahy-e27nhtg

TOC:
[00:00:00] Introduction to George Hotz and Connor Leahy
[00:03:10] George Hotz's Opening Statement: Intelligence and Power
[00:08:50] Connor Leahy's Opening Statement: Technical Problem of Alignment and Coordination
[00:15:18] George Hotz's Response: Nature of Cooperation and Individual Sovereignty
[00:17:32] Discussion on individual sovereignty and defense
[00:18:45] Debate on living conditions in America versus Somalia
[00:21:57] Talk on the nature of freedom and the aesthetics of life
[00:24:02] Discussion on the implications of coordination and conflict in politics
[00:33:41] Views on the speed of AI development / hard takeoff
[00:35:17] Discussion on potential dangers of AI
[00:36:44] Discussion on the effectiveness of current AI
[00:40:59] Exploration of potential risks in technology
[00:45:01] Discussion on memetic mutation risk
[00:52:36] AI alignment and exploitability
[00:53:13] Superintelligent AIs and the assumption of good intentions
[00:54:52] Humanity’s inconsistency and AI alignment
[00:57:57] Stability of the world and the impact of superintelligent AIs
[01:02:30] Personal utopia and the limitations of AI alignment
[01:05:10] Proposed regulation on limiting the total number of flops
[01:06:20] Having access to a powerful AI system
[01:18:00] Power dynamics and coordination issues with AI
[01:25:44] Humans vs AI in Optimization
[01:27:05] The Impact of AI's Power Seeking Behavior
[01:29:32] A Debate on the Future of AI
...

The Threat of AI - Dr. Joscha Bach and Connor Leahy

June 19, 2023 9:46 pm

HQ version here: https://www.youtube.com/watch?v=Z02Obj8j6FQ

Joscha Bach is a leading cognitive scientist and artificial intelligence researcher with an impressive background in integrating human-like consciousness and cognitive processes into AI systems. With a Ph.D. in Cognitive Science from the University of Osnabrück, Germany, Bach has dedicated his career to understanding and replicating the complexities of human thought and behavior.

In his extensive academic and professional career, Bach has worked with renowned institutions like the MIT Media Lab and the Harvard Program for Evolutionary Dynamics. His research focuses on bridging the gap between natural and artificial intelligence, exploring areas such as cognitive architectures, computational models of emotions, and self-awareness in AI systems.

Bach's work expands the horizons of modern AI research, aiming to create artificial intelligences that function as cognitive agents in social environments. In line with his forward-thinking approach, Bach is also an advocate for ethical concerns in AI research and has actively contributed to discussions on AI safety and its long-term impact on society.

Connor Leahy is the CEO of Conjecture and a co-founder and former lead of EleutherAI. He is an AI researcher working on understanding large ML models and aligning them to human values. Conjecture is a team of researchers dedicated to applied, scalable AI alignment research.
Connor believes that transformative artificial intelligence will happen within our lifetime. He also believes that powerful, advanced AI will be derived from modern machine learning architectures and techniques like gradient descent. Connor is currently one of the main spokespeople for the AI alignment movement.

https://twitter.com/NPCollapse
https://www.conjecture.dev/

Moderated by Dr. Tim Scarfe (https://xrai.glass/ and https://twitter.com/MLStreetTalk)
...

Debate On AGI: Existential or Non-existential? (Connor Leahy, Joseph Jacks) [MLST LIVE]

May 31, 2023 6:12 pm

MLST is hosting a LIVE debate on AGI risk.

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk

Transcript and summary: https://docs.google.com/document/d/106R8NLgjH9jkF5sQ4sgWXdLvUNhea3XYy8Uy---k3Vo/edit?usp=sharing

Joseph Jacks will argue for open-source AGI and why he believes it will NOT pose an existential threat to humanity beyond the threat humanity already poses to itself.

Connor Leahy will argue for NOT building AGI and why he believes it would absolutely pose an existential threat to humanity.

https://www.linkedin.com/in/josephjacks/
https://www.linkedin.com/in/connor-j-leahy/
...

Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education

February 2, 2023 6:33 pm

Connor Leahy from Conjecture joins the podcast for a lightning round on a variety of topics ranging from aliens to education. Learn more about Connor's work at https://conjecture.dev

...

Connor Leahy & Eliezer Yudkowsky - Japan AI Alignment Conference 2023

March 24, 2023 12:31 pm

Q&A on AI Alignment by Connor Leahy and Eliezer Yudkowsky.

Recorded March 11 at the 2023 Japan AI Alignment Conference, organized by Conjecture and ARAYA.

https://jac2023.ai/
...

Connor Leahy on how to build a good future with AI

November 13, 2024 11:21 pm

In this interview, Connor Leahy talks about how we need to inform the public about AI risks so that governments will take action. Connor is the CEO of an AI startup and is involved in government advocacy work as well.

We discuss several categories of people that support the AGI race and why they do so. Connor describes why, for instance, he believes he was naive to be an accelerationist when he was young.

The impetus for this interview is a document called The Compendium, which Connor and others created. It is designed as a resource to convince the general public that the AGI race is harmful, and it outlines how we might create a good future for our species.

#agi #aisafety #goodfuture

The Compendium
https://www.thecompendium.ai/

Conjecture
https://www.conjecture.dev/

Connor Leahy
https://en.wikipedia.org/wiki/Connor_Leahy

0:00 Intro
0:15 Connor Leahy intro
0:47 Contents
1:01 Part 1: Introduction to Conjecture
1:39 How the field of AI safety is doing as a whole
2:56 How neural networks are different than software
3:51 What if we avoid making agents?
4:50 Comparison to Chinese room argument
5:29 The role of governments and the public
6:44 Are governments informed enough?
7:13 The Compendium: inform the public
7:59 Connection to my own YouTube channel
8:13 Can an AI system go off the rails on its own?
9:52 Part 2: The race to AGI
10:37 What if AGI already happened
11:00 Open source development of AGI
11:22 Would AGI be visible in the world?
12:16 A lot of confusing things happening
12:51 Earth's season finale
13:22 Why do people keep running the AGI race?
14:03 Five main ideologies
14:11 Group 1: the Utopists
15:11 Why the utopist ideology is dangerous
15:52 Group 2: Big Tech
16:22 AI labs have been adopted by Big Tech
16:48 Group 3a: the Zealots
17:48 Group 3b: the Nihilists
18:48 Group 4: the Accelerationists
19:43 Accelerationists are naive libertarians
20:50 Technology is neither good nor bad
21:20 Open source perspective
22:03 Group 5: the Opportunists
22:23 AI has attracted crypto people
22:58 Part 3: A good future
23:26 We have to make it happen, it's not by default
24:11 Talking to a lot of people to find problem solvers
24:51 For nukes, there is a serious group of people
25:24 There is no such group for AI
26:10 How can society take time to solve these problems?
26:54 Citizens have to start taking action
27:15 Advertising the Compendium
28:10 Conclusion and outro
...
