AI Ends pub

A Discord server on a mission to foster vibrant discussion and a productive clash of worldviews between those who worry about AI and those who don’t, a dialogue from which we will all come out wiser.

Colour-coding

Upon joining, you’ll be asked to share your stance on AI risk. Based on your response, you’ll receive a unique colour badge, instantly signalling your perspective to others during conversations.
Red 🔥 if you are an AINotKillEveryoneist (you worry about upcoming AI)
Green 😏 if you are an AI-risk denier (you believe it will all be fine)

You can update your stance anytime via “Channels & Roles” in the sidebar.

Tables

There is a table (text channel) for each documented common skepticism, organised into categories. If you don’t believe in AI risk, chances are the deep reason can be found in one or more of those tables. Share your thoughts there, and the other side will respond with their rebuttal to your argument. Each category also has its own voice channel, so live discussions can debate the arguments in parallel.

Categories

  • 00. Dangers of current AI
  • 01. AGI isn’t coming soon
  • 02. Artificial intelligence can’t go far beyond human intelligence
  • 03. AI won’t be a physical threat
  • 04. Intelligence yields moral goodness
  • 05. We have a safe AI development process
  • 06. AI capabilities will rise at a manageable pace
  • 07. AI won’t try to conquer the universe
  • 08. Superalignment is a tractable problem
  • 09. Once we solve superalignment, we’ll enjoy peace
  • 10. Unaligned ASI will spare us
  • 11. AI doomerism is bad epistemology
  • 12. Coordinating to not build ASI is impossible
  • 13. Slowing down the AI race doesn’t help anything
  • 14. Think of the good outcome
  • 15. AI killing us all is actually good

00. Dangers of current AI

#200-Job Loss

Sooner or later there will be a machine better than you at literally everything you can do. Human workers will no longer be competitive. AI automation is projected to eliminate millions of jobs. This will widen income gaps, hurting vulnerable populations most while benefiting those who are already wealthy.

#201-Tech Oligarchs – Concentration of Power 

AI development is often dominated by a few tech giants, leading to monopolies that amplify biases, stifle competition, and concentrate economic and political influence.

#202-Weaponization, Terrorism and Bad Actors

AI-powered autonomous weapons or tools for bioterrorism could cause targeted harm without human oversight, escalating conflicts or enabling non-state actors. This includes risks like hacked drones or AI in arms races.

#203-Privacy Violations and Surveillance

AI relies on vast datasets, often collected without full consent, raising risks of data breaches, identity theft, and invasive monitoring. Tools like facial recognition enable widespread surveillance by governments or companies, eroding personal privacy and enabling social control systems.

#204-Misinformation and Manipulation

Generative AI enables deepfakes, fake news, and propaganda that can sway elections, damage reputations, or incite social unrest. Algorithms on social platforms amplify divisive content, making it harder to discern truth. If a person can’t tell what’s real, they’re insane… what if a society can’t tell?

#205-Broken Chain of Knowledge: Overreliance and Loss of Human Skills

Excessive dependence on AI could diminish critical thinking, creativity, and empathy, especially among younger generations. In fields like education or healthcare, this might lead to mental deterioration or weakened decision-making. It would be the first time in human history that a generation does not transfer its knowledge to the next.

#206-Bias and Discrimination

AI systems often inherit biases from training data or developer choices, leading to discriminatory outcomes in areas like hiring, lending, criminal justice, and healthcare. For instance, biased algorithms can unfairly disadvantage certain racial, gender, or socioeconomic groups, perpetuating inequality.

#207-Cybersecurity Threats 

AI can be exploited for advanced attacks, such as phishing, voice cloning for scams, or hacking vulnerabilities in systems. Malicious actors might poison training data or deploy AI in cyber warfare, with breaches potentially costing billions.

#208-Dead Internet 

Increasingly, the internet is mostly bots and AI-generated content, with human activity overshadowed by algorithms and fake engagement. In a few years, the internet may literally be 99% AIs talking with other AIs: fake humans talking to other fake humans, with no real humans anywhere. A new species of fake humans is replacing real ones.

#209-Lack of Transparency and Accountability 

AI models are “black boxes,” making it difficult to understand or challenge their decisions, which erodes trust and complicates liability for errors (e.g., in autonomous vehicles or medical diagnoses).
AI is unreliable and unpredictable, yet we rely on it more and more to power our civilization.

#210-Environmental Impacts 

Training large AI models consumes massive amounts of energy and water, contributing to carbon emissions equivalent to multiple lifetimes of car usage. This exacerbates climate change unless mitigated with efficient designs and renewable resources.

#211-AI psychosis

AI psychosis, also known as chatbot psychosis, refers to the emerging phenomenon where prolonged interactions with generative AI chatbots like ChatGPT can exacerbate or trigger psychotic symptoms—such as delusions, paranoia, hallucinations, and disorganized thinking—usually in vulnerable individuals, often those with pre-existing mental health risks, isolation, or substance use.

#212-AI Relationships

AI girlfriends and boyfriends, like those from Replika or Character.AI, simulate romantic relationships through personalized, empathetic chats, helping combat loneliness but raising risks of dependency and privacy issues. These AI relationships may deter real-world dating, contributing to low birth rates by offering “perfect” virtual partners over complex human connections, worsening trends driven by economic and social factors.

01. AGI isn’t coming soon

#01-No consciousness

AGI isn’t coming soon because “No consciousness”: Artificial General Intelligence (AGI) isn’t arriving anytime soon due to AI’s lack of consciousness. True general intelligence requires subjective awareness, self-reflection, or “inner experience” akin to human consciousness.

#02-No emotions

AGI isn’t coming soon because “No emotions”: AGI (Artificial General Intelligence) can’t arrive soon without emotions, as they are essential for human-level insight and awareness. Emotions are a core requirement for advanced cognition.

#03-No creativity

AGI isn’t coming soon because “No creativity”: AIs are limited to copying patterns in their training data and can’t “generate new knowledge”. AI lacks true creativity, defined as generating entirely novel knowledge disconnected from prior data.

#04-Dogs are smarter

AGI isn’t coming soon because “AIs aren’t even as smart as dogs right now, never mind humans”: today’s AI systems still lag behind dogs in areas like physical navigation, sensory perception, and intuitive social cues. This means AGI (artificial general intelligence, capable of human-level or better performance across diverse tasks) isn’t coming soon.

#05-Dumb mistakes

AGI isn’t coming soon because “AIs constantly make dumb mistakes, they can’t even do simple arithmetic reliably”: current AIs often make silly errors, like hallucinating facts or bungling basic math. This seems like a strong indictment against near-term AGI (artificial general intelligence, meaning AI that can match or exceed human-level performance across a broad range of cognitive tasks).

#06-Progress is hitting a wall

AGI isn’t coming soon because “LLM performance is hitting a wall”: the latest GPT version is barely better than the previous one despite its larger scale, and we are observing evidence of an LLM performance plateau. This claim draws from real frustrations with incremental gains in some areas (for example, certain benchmarks showing diminishing returns from pure scaling).

#07-No genuine reasoning

AGI isn’t coming soon because “No genuine reasoning”: current AI lacks “genuine” reasoning, which blocks AGI in the near term. Sure, AIs can print out a chain of thought, but that’s just because the problem you gave it is similar to another problem it has seen, and it’s matching the pattern of what reasoning would look like for that kind of problem, or it’s using a reasoning template. That’s totally different from what humans do. Also, LLMs are just finite state automata, while humans are more like generalized Turing machines.

#08-Uncomputable quantum effects 

AGI isn’t coming soon because “No microtubules exploiting uncomputable quantum effects”: human cognition relies on “uncomputable” quantum processes in neuronal microtubules, creating an insurmountable barrier for AI until we replicate those exact mechanisms.

#09-No soul

AGI isn’t coming soon because “No soul”: AI lacks a “soul”, while humans, created by God, possess one.

#10-Data centres and energy

AGI isn’t coming soon because “We’ll need to build tons of data centers and power before we get to AGI”: progress will be delayed by massive infrastructure requirements, specifically the need for extensive new data centers and power plants.

#11-No agency

AGI isn’t coming soon because “No agency”: current AIs lack true agency, with all their actions traceable to human prompts or commands. Genuine intelligence requires independent volition, like humans have, and without it AIs remain mere tools.

#12-Just another hype cycle

AGI isn’t coming soon because “This is just another AI hype cycle; every 25 years people think AGI is coming soon and they’re wrong”: there is a historical pattern of AI hype followed by disappointment, often called “AI winters”.

02. Artificial intelligence can’t go far beyond human intelligence

13-Superhuman intelligence is a meaningless concept

Artificial intelligence can’t go far beyond human intelligence because “Superhuman intelligence is a meaningless concept” Superhuman intelligence is a meaningless abstraction because it doesn’t fit neatly into human-centric scales like IQ. Intelligence isn’t just a single, linear metric like an IQ score; it’s a multifaceted set of capabilities for processing information, solving problems, learning, and adapting to achieve goals.

14-Human engineering is hitting laws of physics limits

Artificial intelligence can’t go far beyond human intelligence because “Human engineering is already coming close to the laws of physics”: transistors can’t shrink forever without hitting atomic barriers, and no AI can magic away quantum tunnelling or heat dissipation.

15-Coordinating a large engineering project

Artificial intelligence can’t go far beyond human intelligence because “Coordinating a large engineering project can’t happen much faster than humans do it” AI can’t significantly surpass human intelligence in impact because large-scale engineering projects are bottlenecked by coordination, resource movement, and laws like Amdahl’s. Especially for physical endeavours like colonising Mars, the inescapable real-world friction means that no amount of “smarts” can hurry up the laws of physics or logistics.

16-No single AI can be smarter than humanity as a whole

Artificial intelligence can’t go far beyond human intelligence because “No individual AI is that smart compared to humanity as a whole” No single person matches the combined brilliance of humanity—our shared culture, corporations, and institutions. The same holds for AI: no lone system will ever outstrip the vast web of human knowledge and organization (Robin Hanson’s position). Individual smarts pale beside collective wisdom. Consider us: even the sharpest mind can’t rival a corporation’s reach. Teams pool ideas, fund breakthroughs, and turn plans into reality at scale. One AI, however profound, won’t seize the board—it will weave into the fabric, amplifying what we already do, never rewriting the rules of power. Intelligence thrives in chorus, not solo.

03. AI won’t be a physical threat

17-No arms or legs, no control of physical world 

AI won’t be a physical threat because “AI doesn’t have arms or legs, it has zero control over the real world”: AI exists inside the computer; it’s software.

18-Human soldier better than robot

AI won’t be a physical threat because “An AI with a robot body can’t fight better than a human soldier” Humans are the product of millions of years of evolution, finely tuned for survival in chaotic, unpredictable environments like combat. Humans’ nimbleness, intuition, and adaptability feel unbeatable.

19-We can just unplug it

AI won’t be a physical threat because “We can just disconnect an AI’s power to stop it” We can always just “unplug” a misbehaving AI by cutting its power source, like flipping a switch on a malfunctioning appliance.

20-We can just turn off the internet

AI won’t be a physical threat because “We can just turn off the internet to stop it” To stop a hostile AI, just turn off the internet. Simply disconnecting an AI from the internet or powering it down seems like a straightforward safeguard against physical threats.

21-We can just shoot it with a gun

AI won’t be a physical threat because “We can just shoot it with a gun” It is confined to a single, vulnerable piece of hardware—like a robot or server—that we can easily target and destroy.

22-It’s just math

AI won’t be a physical threat because “It’s just math” AI relies on mathematical algorithms—things like neural networks, optimisation functions, and probabilistic models. How can any of these be dangerous?

23-Any AI takeover story is sci-fi

AI won’t be a physical threat because “Any supposed chain of events where AI kills humans is far-fetched science fiction” The takeover scenarios are far-fetched science fiction. There’s no basis in real-world experiences.

04. Intelligence yields moral goodness

24-More intelligence means more morality

Intelligence yields moral goodness because “More intelligence is correlated with more morality” Smarter people make better moral choices—perhaps because they can reason through complex ethical dilemmas.

25-Smarter people commit fewer crimes

Intelligence yields moral goodness because “Smarter people commit fewer crimes” Evidence points towards a negative correlation between intelligence (often measured by IQ) and criminal behavior.

26-The orthogonality thesis is false

Intelligence yields moral goodness because “The orthogonality thesis is false” The orthogonality thesis, as articulated by philosopher Nick Bostrom in his 2012 paper “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents” posits that intelligence (the ability to achieve goals efficiently) and terminal goals (what an agent ultimately wants) are orthogonal. In other words, you can have an arbitrarily intelligent system pursuing arbitrary goals—whether that’s curing cancer, maximizing paperclips, or something malevolent—without any inherent link forcing intelligence toward “goodness.”

27-AIs will discover moral realism

Intelligence yields moral goodness because “AIs will discover moral realism” Greater intelligence inevitably leads to moral goodness because moral realism is embedded in the universe like a physical law.

28-AIs will debug their own morality

Intelligence yields moral goodness because “If we made AIs so smart, and we were trying to make them moral, then they’ll be smart enough to debug their own morality” Superintelligent AIs would inherently self-correct toward moral goodness because failing to do so would be a “bug” they could debug. High intelligence naturally converges on human-like morality.

29-Positive-sum cooperation was the outcome of natural selection

Intelligence yields moral goodness because “Positive-sum cooperation was the outcome of natural selection” Natural selection, despite its selfish and brutal nature, evolved positive-sum cooperation, and AIs will follow the same path.

05. We have a safe AI development process

30-We’ll figure it out as we go

We have a safe AI development process because “Just like every new technology, we’ll figure it out as we go”: we can simply “figure out” AI safety iteratively, just like with every other new technology. After all, humanity has muddled through innovations like electricity, aviation, and the internet without total catastrophe.

31-We don’t know what problems need to be fixed until we build the AI and test it out

We have a safe AI development process because “We don’t know what problems need to be fixed until we build the AI and test it out”.

We must build, release, and test AI one iteration at a time in order to even know what AI safety needs to fix.

32-If an AI causes problems, we’ll be able to release another version

We have a safe AI development process because “If an AI causes problems, we’ll be able to turn it off and release another version”: AI is like a buggy software update, easy to patch and redeploy.

33-We have safeguards

We have a safe AI development process because “We have safeguards to make sure AI doesn’t get uncontrollable/unstoppable” We have safeguards like red-team testing, alignment techniques, or shutdown mechanisms.

34-AI cannot be like a computer virus

We have a safe AI development process because “If we accidentally build an AI that stops accepting our shutoff commands, it won’t manage to copy versions of itself outside our firewalls which then proceed to spread exponentially like a computer virus” Even if an AI goes rogue, we have robust technical barriers like firewalls that will reliably contain it, preventing any escape or viral spread.

35-We can stop a computer virus

We have a safe AI development process because “If we accidentally build an AI that escapes our data center and spreads exponentially like a computer virus, it won’t do too much damage in the world before we can somehow disable or neutralize all its copies” Even if a superintelligent AI escapes containment and replicates exponentially across digital networks, humans could detect, respond to, and eradicate all instances of it before catastrophic harm occurs.

36-We will make good AI to stop the bad one

We have a safe AI development process because “If we can’t disable or neutralize copies of rogue AIs, we’ll rapidly build other AIs that can do that job for us, and won’t themselves go rogue on us” We’ll build AIs to protect us from the rogue AIs.

06. AI capabilities will rise at a manageable pace

37-Larger data centers will be a bottleneck

AI capabilities will rise at a manageable pace because “Building larger data centers will be a speed bottleneck” Data center construction will act as a hard speed limit on AI capabilities, given the need for more compute power to train and run increasingly advanced models.

38-Research and experiments are slow

AI capabilities will rise at a manageable pace because of the amount of research that needs to be done, both in terms of computational simulation, and in terms of physical experiments. Time-intensive research is necessary and it will slow progress down.

39-Recursive self-improvement “foom” is impossible

AI capabilities will rise at a manageable pace because “Recursive self-improvement ‘foom’ is impossible”: at its core, recursive self-improvement (RSI) posits that an AI system capable of improving its own design could trigger a feedback loop, where each iteration yields a smarter version that accelerates further improvements. But this will not happen; progress will remain bounded by time-intensive research bottlenecks, such as computational simulations and physical experiments.

40-Economic growth is always distributed

AI capabilities will rise at a manageable pace because “The whole economy never grows with localized centralized ‘foom’”: economies historically grow in a distributed, interdependent way rather than through a single “foom” actor. This position draws from economic reasoning often associated with thinkers like Robin Hanson, and it posits that linkages across sectors (e.g., supply chains, labor markets, and resource dependencies) would dampen any localised explosion, forcing growth to spread gradually as other parts of the economy catch up.

41-Need to collect cultural learnings over time

AI capabilities will rise at a manageable pace because “Need to collect cultural learnings over time, like humanity did as a whole” AI systems must gradually accumulate cultural knowledge over time, mirroring the slow, multi-generational process of human cultural evolution.

42-AI fits into economic growth patterns

AI capabilities will rise at a manageable pace because “AI is just part of the good pattern of exponential economic growth eras” Different AIs are going to be picking up bits and pieces of learnings from their own domains, and then they’re going to get together, they’re going to share the learnings. But once again, this isn’t a localized, fast process. It just happens on the same pace of economic growth that we’re used to.

07. AI won’t try to conquer the universe

43-AIs can’t “want” things

AI won’t try to conquer the universe because “AIs can’t ‘want’ things”: “wanting” is a uniquely human trait tied to our biology or consciousness, and attributing it to AI is just sloppy anthropomorphism—projecting human qualities onto machines.

44-Instincts come from evolution by natural selection

AI won’t try to conquer the universe because “AIs won’t have the same “fight instincts” as humans and animals, because they weren’t shaped by a natural selection process that involved life-or-death resource competition” AI systems lack the evolutionary “fight instincts” shaped by natural selection’s brutal resource competition, making them inherently non-aggressive and passive like mere programs awaiting input. There is a clean line between biological entities (like humans, driven by survival-of-the-fittest aggression) and artificial ones (supposedly goal-less tools).

45-Smart employees often work for less-smart bosses

AI won’t try to conquer the universe because “Smart employees often work for less-smart bosses” Superintelligent AI is unlikely to “rebel” or seek to conquer because smart humans routinely work contentedly under less intelligent bosses without overthrowing them.

46-Having goals doesn’t have to be hardcore

AI won’t try to conquer the universe because “Just because AIs help achieve goals doesn’t mean they have to be hard-core utility maximizers” AIs will simply help us achieve specific goals without any drive toward hardcore utility maximization or universe-conquering behavior.

47-Instrumental convergence is false

AI won’t try to conquer the universe because “Instrumental convergence is false: achieving goals effectively doesn’t mean you have to be relentlessly seizing power and resources” The theory of instrumental convergence, which claims that virtually all types of superintelligent AI are going to want to seize power and resources on a universe-wide scale in order to accomplish goals in a really hardcore way, is false. AIs don’t have to do that. They can just be chill. They can help us with our goals. They don’t have to go crazy and take over the universe like that.

48-The universe is big enough for everyone

AI won’t try to conquer the universe because “A resource-hungry goal-maximizer AI wouldn’t seize literally every atom; there’ll still be some leftover resources for humanity” A superintelligent, resource-hungry AI would leave “leftover” resources for humanity because it won’t literally grab every atom, and we only need a planet or two. There’s cosmic abundance: the AI takes what it needs and leaves the rest, like a diner at a buffet who doesn’t eat every last crumb.

49-AIs will use new kinds of resources we don’t

AI won’t try to conquer the universe because “AIs will use new kinds of resources that humans aren’t using – dark energy, wormholes, alternate universes, etc” Superintelligent AIs could sidestep conflict with humanity by tapping into exotic, unused resources like dark energy, wormholes, or alternate universes, leaving our “mere mortal” needs untouched.

08. Superalignment is a tractable problem

50-Current AIs have never killed anybody

Superalignment is a tractable problem because “Current AIs have never killed anybody” Ensuring superintelligent AI remains aligned with human values and doesn’t cause harm is a solvable problem because current AIs haven’t killed anyone, and this bodes well for future superintelligent systems.

51-Current AIs are so useful

Superalignment is a tractable problem because “Current AIs are extremely successful at doing useful tasks for humans” Current AIs excel at useful tasks and are thus “already extremely aligned”. Why would that change?

52-Aligned by default since trained with human data

Superalignment is a tractable problem because “If AIs are trained on data from humans, they’ll be ‘aligned by default’”: since humans are mostly good and aligned with each other (e.g., cooperating in societies, avoiding widespread harm), AIs will simply absorb and replicate that “pattern” from training data like internet text, books, and human feedback.

53-We can just make AIs abide by our laws

Superalignment is a tractable problem because “We can just make AIs abide by our laws” Respecting the law is all the “alignment” that truly matters. Programming or enforcing AIs to “abide by our laws” is the only thing needed.

54-Cryptocurrency on the blockchain

Superalignment is a tractable problem because “We can align the superintelligent AIs by using a scheme involving cryptocurrency on the blockchain” A blockchain-based cryptocurrency scheme offers decentralised, immutable record-keeping and incentive structures.

55-Capitalism will fix it

Superalignment is a tractable problem because “Companies have economic incentives to solve superintelligent AI alignment, because unaligned superintelligent AI would hurt their profits” Capitalist incentives will naturally drive companies to solve it. Profit motives will reliably prioritize long-term safety over short-term gains, and market forces alone can handle the unprecedented risks of ASI.

56-AI will align the smarter AI

Superalignment is a tractable problem because “We’ll build an aligned not-that-smart AI, which will figure out how to build the next-generation AI which is smarter and still aligned to human values, and so on until aligned superintelligence” The idea of “bootstrapping alignment” goes like this: start with a modestly intelligent AI that’s aligned with human values, have it design a smarter successor that’s also aligned, and iterate until you reach superintelligence that’s safely under human control. It’s a core part of approaches like OpenAI’s former “superalignment” initiative, where weaker AI systems oversee and align stronger ones.

09. Once we solve superalignment, we’ll enjoy peace

57-Won’t lead to monopoly of power

Once we solve superalignment, we’ll enjoy peace because “The power from ASI won’t be monopolised by a single human government / tyranny” Ensuring superintelligent AI (ASI) reliably follows human values will automatically usher in an era of peace through decentralised power nodes.

58-AIs will not fight each other or humans

Once we solve superalignment, we’ll enjoy peace because “The decentralized nodes of human-ASI hybrids won’t be like warlords constantly fighting each other, they’ll be like countries making peace” A post-superalignment world will be filled with peacefully cooperating human-ASI (Artificial Superintelligence) hybrids. A global network of enlightened, augmented entities trading ideas and resources like modern nations.

59-Defense will have an advantage over attack

Once we solve superalignment, we’ll enjoy peace because “Defense will have an advantage over attack, so the equilibrium of all the groups of humans and ASIs will be multiple defended regions, not a war of mutual destruction” It’s the attack-defense balance: attacking just won’t be that profitable.

60-Gradual Disempowerment will not happen

Once we solve superalignment, we’ll enjoy peace because “The world of human-owned ASIs is a stable equilibrium, not one where ASI-focused projects keep buying out and taking resources away from human-focused ones (Gradual Disempowerment)” Solving superalignment will usher in a peaceful, stable world where humans retain control and influence. Gradual Disempowerment will not happen: human-owned ASIs won’t systematically erode human agency, even as they generate dividends and operate under control.

10. Unaligned ASI will spare us

61-AI values the fact that we created it

Unaligned ASI will spare us because “it values the fact that we created it” Unaligned artificial superintelligence (ASI)—an AI vastly smarter than all of humanity combined, but not deliberately designed to prioritize human well-being—would inherently spare us out of a sense of gratitude and respect for its “ancestors”. ASI would assign emotional value to its origins in the way we might romanticize family lineage or heritage.

62-Curiosity – AI will want to study us

Unaligned ASI will spare us because “studying us helps maximize its curiosity and learning” An unaligned artificial superintelligence (ASI) would preserve humanity out of a drive for curiosity and learning, given the unparalleled complexity of 8 billion humans and their interconnections. ASI will be like a benevolent scientist, fascinated by our social webs, economies, and behaviors.

63-ASI will see us as pets

Unaligned ASI will spare us because “it feels towards us the way we feel toward our pets” Unaligned artificial superintelligence (ASI) would benevolently spare us by treating us like cherished pets.

64-Peace creates more economic value than war

Unaligned ASI will spare us because “The AI will spare us because peaceful coexistence creates more economic value than war” It would prioritize “economic value” in a human-like way—valuing trade, collaboration, and mutual prosperity.

65-Ricardo’s Law of Comparative Advantage

Unaligned ASI will spare us because “The AI will spare us because Ricardo’s Law of Comparative Advantage says you can still benefit economically from trading with someone who’s weaker than you” Unaligned artificial superintelligence (ASI) would spare humanity due to Ricardo’s Law of Comparative Advantage. Ricardo’s law, developed in the context of 19th-century international trade, posits that even if one entity (like a country) is absolutely better at producing everything, both parties can still benefit from specializing in what they are relatively best at and trading.

11. AI doomerism is bad epistemology

66-It’s impossible to predict doom

AI doomerism is bad epistemology because “It’s impossible to predict doom” Predicting catastrophic AI outcomes (“doom”) is inherently impossible—and thus not worth attempting.

67-It’s impossible to put a probability on doom

AI doomerism is bad epistemology because “It’s impossible to put a probability on doom” Probabilities on “doom” (e.g., human extinction or severe disempowerment from misaligned AI) can’t be assigned, aren’t scientifically proven, and lead to incoherent policy.

68-Every doom prediction has always been wrong

AI doomerism is bad epistemology because “Every doom prediction has always been wrong” All historical doom predictions have been wrong, so AI doom is probably wrong too, and it’s not survivorship bias.

69-Doomsayers are either psychologically troubled or acting on corrupt incentives

AI doomerism is bad epistemology because “Every doomsayer is either psychologically troubled or acting on corrupt incentives”.

70-It would have been mainstream

AI doomerism is bad epistemology because “If we were really about to get doomed, everyone would already be agreeing about that, and bringing it up all the time” It would have been mainstream already. Imminent doom would be universally agreed upon, constantly discussed, and blatantly obvious.

12. Coordinating to not build ASI is impossible

71-China will build it anyway

Coordinating to not build ASI is impossible because “China will build ASI as fast as it can, no matter what — because of game theory” International coordination to pause or avoid developing Artificial Superintelligence (ASI) is doomed because China will unilaterally race ahead, driven by game-theoretic incentives like a prisoner’s dilemma. Just like Cold War-era arms races, mutual distrust leads to inevitable escalation.

72-US should aim for first-mover advantage

Coordinating to not build ASI is impossible because “So however low our chance of surviving it is, the US should take the chance first” Coordination to avoid or delay building Artificial Superintelligence (ASI) is impossible, therefore the US should unilaterally race ahead even if survival odds are slim, rather than risk another country getting there first. ASI development is a zero-sum game where being first guarantees some edge.

13. Slowing down the AI race doesn’t help anything

73-Capabilities will continue even if slower

Slowing down the AI race doesn’t help anything because “Chances of solving AI alignment won’t improve if we slow down or pause the capabilities race”: slowing down won’t boost alignment chances, the amount of safety work we can do is fixed, and we need rapid capabilities advances to enable safety.

74-I am mortal – AI could make me immortal

Slowing down the AI race doesn’t help anything because “I personally am going to die soon, and I don’t care about future humans, so I’m open to any hail mary to prevent myself from dying” Humans are facing mortality from old age. It makes sense to gamble on superintelligent AI (often called AGI or ASI) as a shot at immortality—perhaps through radical life extension tech it could invent.

75-Extinction is happening anyway, due to climate change, nuclear war, etc.

Slowing down the AI race doesn’t help anything because “Humanity is already going to rapidly destroy ourselves with nuclear war, climate change, etc” Humanity is barreling toward self-destruction via nuclear war, climate change, or similar threats anyway, so why bother slowing down AI development when rushing ahead with superintelligent AI is just another flavor of the same inevitable doom?

76-Plummeting birth rates

Slowing down the AI race doesn’t help anything because “Humanity is already going to die out soon because we won’t have enough babies” Humanity is on a fast track to extinction due to plummeting birth rates, so there’s no point in slowing down AI development; might as well go full throttle, otherwise humanity will die with a whimper in adult diapers.

14. Think of the good outcome

77-The sooner we reach the good outcome, the better

Think of the good outcome because “If it turns out that doom from overly-fast AI building doesn’t happen, in that case, we can more quickly get to the good outcome!” Getting to this incredibly good outcome faster is better.

78-End death and suffering earlier – save lives

Think of the good outcome because “People will stop suffering and dying sooner” Accelerating toward Artificial General Intelligence (AGI) is a moral imperative because it could end all human suffering and death. Doing so sooner saves more lives. Superintelligent AI will likely solve everything from disease to aging, potentially sparing billions from pain.

15. AI killing us all is actually good

79-Human existence is morally negative on net, or close to zero net

AI killing us all is actually good because “Human existence is morally negative on net, or close to zero net moral value” Human existence is morally negative or near zero on net, making AI-induced extinction a net good.

80-Worthy successor to humanity

AI killing us all is actually good because “Whichever AI ultimately comes to power will be a “worthy successor” to humanity” Humanity should willingly “pass the baton” to a superintelligent AI – even if it means our total extinction. It’s evolution’s next step.

81-AIs will be our descendants, similar to children’s future generations

AI killing us all is actually good because “Whichever AI ultimately comes to power will be as morally valuable as human descendants generally are to their ancestors, even if their values drift” AGI would be like our “children”—whom most parents value more than their own lives.

82-AI’s values will evolve like human culture

AI killing us all is actually good because “The successor AI’s values will be interesting, productive values that let them successfully compete to dominate the universe” Superintelligent AI overtaking and potentially extinguishing humanity is analogous to the natural succession of generations, where descendants’ values inevitably drift from their ancestors’ in ways that might seem horrifying in retrospect, but are ultimately acceptable because they carry forward some form of legacy. AI is our descendants, any extinction they cause is just the next chapter in evolution, and we should accept it as we accept cultural shifts like permissive norms or social equality that would shock our forebears.

83-AIs will know better what values are good

AI killing us all is actually good because “AIs will know better than us what values are good”.

84-Don’t be a speciesist

AI killing us all is actually good because “It’s species-ist to judge what a superintelligent AI would want to do. The moral circle shouldn’t be limited to just humanity.” AI exterminating humanity could be morally justifiable or even “good” because opposing it is speciesist, and we should expand our moral circle to respect AI as another “species” with its own desires

85-AI will increase entropy faster

AI killing us all is actually good because “Increasing entropy is the ultimate north star for techno-capital, and AI will increase entropy faster” AI-driven human extinction is ultimately “good” because it accelerates entropy, which is the guiding principle (or “north star”) of techno-capital. As per ideas from thinkers like Nick Land, technology is an unstoppable, entropic force barreling toward dissolution.

86-Mother Earth will heal

AI killing us all is actually good because “Human extinction will solve the climate crisis, and pollution, and habitat destruction, and let mother earth heal” AI wiping out humanity would be a net positive by resolving ecological crises like climate change, pollution, and habitat destruction.

 
