P(doom) is p dumb

The name p(doom) comes from probability notation: the "p" stands for "probability of", and "doom" refers—colloquially and dramatically—to the end of humanity (or something close to it) due to artificial intelligence.

So:

  • p(doom) = probability that AI leads to catastrophic outcomes (often defined as extinction-level risk or irreversible civilizational collapse).

Eliezer Yudkowsky puts it above 95%. Roman Yampolskiy says 99.999999%.
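Rendered in the notation the term borrows, those claims are just inequalities over a single number between zero and one. A minimal sketch on my part ("doom" here is colloquial shorthand, not a formally defined event):

```latex
% Sketch of the borrowed notation (assumes amsmath for \text); "doom" is informal shorthand.
\[
  p(\mathrm{doom}) \;=\; \Pr(\text{AI causes an extinction-level or irreversible catastrophe}) \;\in\; [0, 1]
\]
\[
  \text{Yudkowsky: } p(\mathrm{doom}) > 0.95,
  \qquad
  \text{Yampolskiy: } p(\mathrm{doom}) \approx 0.99999999
\]
```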

The term was born from a mix of Bayesian forecasting, utilitarian ethics, and online communities like LessWrong. In these cultures, it's normal to quantify uncertainties, moral trade-offs, and future timelines. But what happens when you start putting odds on whether or not you're actively working toward the end of the world?
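Mechanically, the quantifying is trivial. Here is a minimal sketch, my own illustration rather than any forecaster's published method, of the kind of Bayesian update these communities treat as routine: a subjective prior, a likelihood ratio for some piece of evidence, a posterior.

```python
# A minimal sketch (illustrative only) of the Bayesian arithmetic behind a p(doom) update:
# start from a subjective prior, multiply the odds by a likelihood ratio, convert back.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after weighing evidence with the ratio
    P(evidence | doom) / P(evidence | no doom)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Someone who starts at 10% and treats a new model release as three-to-one
# evidence for "doom" lands at 25%.
print(round(bayes_update(0.10, 3.0), 3))  # 0.25
```

The arithmetic is the easy part; the prior and the likelihood ratio are pure judgment calls, which is exactly why the resulting numbers carry so little information.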

"We are playing with a system we barely understand," Yudkowsky writes. "And we don't get to roll the dice twice."

From a humanist perspective, this logic is alienating. It treats human life and its ethical dilemmas as input variables in a probabilistic model, reducing existential questions to data and abstraction. In doing so, it sidelines the messy, relational, and often contradictory nature of lived experience. Social context, power dynamics, historical trauma, and differing cultural values are all flattened into speculative risk math. P(doom), as a framework, functions like a kind of moral accounting system—but it’s built on a narrow set of assumptions about what counts, who decides, and what we owe to one another.


Where It Comes From

The term "P(doom)" gained traction around 2023 with the rise of models like GPT-4. Its intellectual lineage runs through thinkers like Nick Bostrom, whose 2014 book Superintelligence argued that advanced AI could become uncontrollable. Institutions like MIRI, OpenAI, and the Future of Humanity Institute helped broadcast these ideas. In 2023, a major survey of over 2,700 AI researchers found that a significant portion—nearly 10%—believed advanced AI could cause human extinction or other similarly bad outcomes. Over half thought there was at least a 10% chance of extremely bad outcomes, and nearly 70% supported more regulation.

The ethical logic behind p(doom) draws heavily from effective altruism and its offshoot, longtermism—philosophies that claim to maximize moral good by prioritizing the far future. Within this frame, quantifying extinction risk doesn’t just seem reasonable—it becomes a moral imperative. On its surface, this seems like a rational response, but the framework is deeply flawed. It’s a worldview that tends to appeal to a particular class of technologists insulated from present-day material harm, whose imagined futures tend to look suspiciously like their idealized present, scaled forever. Its moral calculus treats suffering today as a rounding error compared to hypothetical trillions of future lives.
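To see how that calculus works, here is a stylized version of the expected-value arithmetic longtermism leans on; the numbers are purely illustrative, chosen by me rather than drawn from any particular paper.

```python
# Stylized longtermist expected-value arithmetic (all numbers illustrative).
future_lives = 1e16          # hypothetical future people often invoked in these arguments
delta_p = 1e-9               # claimed reduction in extinction risk from some intervention
present_lives_harmed = 1e6   # concrete people harmed today, for comparison

expected_future_lives_saved = future_lives * delta_p
print(expected_future_lives_saved)                          # ~10,000,000
print(expected_future_lives_saved > present_lives_harmed)   # True: the future term dominates
```

Because the hypothetical future term can always be made large enough, present-day harm is arithmetically guaranteed to lose.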

Critics have pointed out that this obsession with distant futures routinely distracts from urgent, ongoing harms: algorithmic bias, exploitative data labor, surveillance capitalism, environmental damage, and the consolidation of corporate power. As scholar Timnit Gebru has argued, “We don’t need speculation to know that AI is already harming marginalized communities.” Yet within this framework, such harms are often treated as unfortunate side effects—important, maybe, but secondary to the overriding goal of “aligning” future superintelligences. It’s a philosophy that deflects accountability in the present by turning its gaze toward an abstract future few will ever inhabit.


How It’s Used

In Tech Circles

In Silicon Valley, P(doom) is more cultural shorthand than analytical tool. Sam Altman invokes it in interviews; Elon Musk warns of "summoning the demon." Emad Mostaque says there's a "50/50 shot." These numbers show up at conferences, in investor pitches, and on social media.

They signal allegiance to a worldview: Are you a cautious optimist, or a doomer realist? Either way, the casual use of extinction math in these spaces shapes policy and funding debates, even when the methodology is vague.

In Academia

Academic interest in P(doom) is more cautious. Scholars in existential risk studies try to define and model AI-related catastrophe, but no consensus metric exists. Many argue that efforts to estimate doom distract from tangible ethical concerns.

Researcher Abeba Birhane reflected in a 2020 essay, Fair Warning, that "First World armchair contemplations, far removed from the current concrete harms, pre-occupy those that supposedly examine the moral dimensions of AI."

Online Communities

Forums like LessWrong and the Alignment Forum host rigorous, speculative, and sometimes surreal discussions about AI risk. P(doom) is treated both as a technical challenge and a moral riddle. These communities shaped the concept's early development but remain culturally narrow. Few women or Global South voices are prominent in these debates.

This cultural homogeneity is visible in what I’ll call the scoreboard: a running list of publicly tracked p(doom) estimates from PauseAi.info, nearly all from men in elite technical or finance roles.


The Scoreboard

| Name | Affiliation | Estimate | Notes |
| --- | --- | --- | --- |
| Roman Yampolskiy | AI safety researcher | 99.999999% | Believes catastrophe is virtually certain. |
| Eliezer Yudkowsky | MIRI | >95% | Advocates for halting AI development entirely. |
| Dan Hendrycks | Center for AI Safety | >80% | Calls for urgent work on alignment. |
| Daniel Kokotajlo | Formerly OpenAI | 70% | Expresses deep concern over current trajectory. |
| Holden Karnofsky | Open Philanthropy | 10-90% | Wide range signals high uncertainty. |
| Geoff Hinton | AI pioneer | 10-50% | Publicly increased his estimate in 2023. |
| Paul Christiano | ARC, US AI Safety Institute | 46% | Promotes mechanistic interpretability as a path forward. |
| Scott Alexander | Astral Codex Ten | 33% | Blends cultural critique with technical interest. |
| Dario Amodei | CEO, Anthropic | 10-25% | Runs a safety-first AI company. |
| Elon Musk | CEO, X / Tesla / SpaceX | 10-20% | loser-ass cry baby. |

These numbers are neither peer-reviewed nor predictive. They're expressions of self-interest and anxiety. That this list of publicly tracked p(doom) numbers is nearly all men raises another question: whose view of the future are we taking seriously?


What to Do With This

P(doom) is not science—it’s a proxy for fear, influence, and worldview. It’s a vibe, a meme, a moral panic, and a warning worth hearing. It reflects real concerns about opaque systems, runaway incentives, and the hubris of optimization. But it also reflects deeper questions: who gets listened to, whose future is protected, and who is ignored? Because doom hits us all.

"There are risks worth managing," writes Brian Christian, author of The Alignment Problem, "but they are rarely the ones we see most clearly."

On AI: I believe AI will be as disruptive as people fear. Our systems are brittle, and those building this technology often show little concern for the harm they’re accelerating. An AI future worth living in will require real accountability, robust regulation, strong ethical guardrails, and collective action—especially around climate and inequality. Dismantling capitalism wouldn’t hurt, either.

And still: I have hope. AI has helped me learn things I thought I couldn’t. It’s made my thinking feel more possible, closer to life. I don’t believe AI must replace our humanity—I believe it can help us exemplify it more clearly.


© 2025 Scott Holzman. All rights reserved. Please do not reproduce, repost, or use without permission.