
Opinion
The doom scale: How SF house parties are deciding the future of AI investment
Why Silicon Valley's party small talk about the apocalypse is now driving billion-dollar investments.
"So what's your p-doom?"
The question comes up minutes into a conversation with a group of people in the living room of a San Francisco mansion. I'm at a house party in one of the many communes you can find in the city. All ten people who live here work in tech, most of them in artificial intelligence: some at places like Google, Microsoft, OpenAI, and Meta, and some at "smaller" startups worth only a few billion dollars.
After almost a year in Silicon Valley working at an AI startup, I've heard this question many times at social gatherings, tossed out as casually as a remark about the weather.
P-doom, short for "probability of doom," is the number people give for the chance that humanity, or the world, comes to an end. It started as a joke on forums where AI researchers and philosophers tried to predict the future. The estimate each person sets for themselves ranges from zero to one hundred: zero means artificial intelligence will work only for our ultimate good and no long-term harm will come from its development; one hundred means humanity is going to be destroyed and there's no point in even planning for the future (unless we manage to stop it).
Kevin, who works as a researcher at one of the big AI companies, asked me this question this week, as if discussing weekend plans.
In the mansion's living room, the two of us sat with Diana, a product manager at a startup, and Ben, who works at one of the AI safety nonprofits that have sprouted up here like mushrooms after rain: organizations whose purpose is to monitor AI capabilities and make sure we aren't acting irresponsibly.
"I give it 35, but Kevin says he's around 60, even though he bought a Tesla this week- so go figure, maybe he doesn't really believe we'll be extinct," said Diana. "Elon Musk is around 20-30, and Dario Amodei, CEO of Anthropic, is between 10-50. Eliezer Yudkowsky (one of the founders of MIRI, one of the research institutes designed to prevent destruction by artificial intelligence) thinks the number is 95- let's hope he's wrong."
By day, all of us build these models; by night, we discuss whether that same artificial intelligence will be what puts an end to our existence.
The people with high numbers are certain the world will end within a decade. When they're not working on breakthrough products, they're burning through their money, living hedonistic lives, doing lots of drugs, forming relationship configurations not seen outside Silicon Valley, and encouraging their relatives not to save for the future because the world is ending anyway.
Those with low numbers are certain that thanks to artificial intelligence, humanity will live in endless abundance, and we'll all be able to lie on the beach all day, paint, and relax.
This thought exercise goes well beyond a philosophical parlor game among computer geniuses. These numbers affect where the billions of dollars invested in AI companies flow, and those same researchers, product managers, and engineers are the ones building the future we'll live in.
Take, for example, two companies on opposite sides of the divide. Safe Superintelligence, with a development center in Tel Aviv, was founded by Ilya Sutskever, one of OpenAI's co-founders, and is valued at $32 billion: no product, no profit, just the idea of an artificial intelligence that will lead humanity to a safe place.
On the other side is OpenAI, whose employees are at all the parties I mentioned earlier, and which raised funding at a valuation of over $300 billion. That same OpenAI started as a nonprofit; Sam Altman and the organization's senior researchers were devoted followers of Eliezer Yudkowsky, the early researcher who warned about the dangers of artificial intelligence, and his doctrine was the reason they founded OpenAI. But after its great success, many argue that OpenAI is the one advancing us in giant steps toward the end.
Funding flows to both sides of the divide, depending on what the investors believe, and that belief is shaped by the researchers, engineers, and tech people at apartment parties.
And behind the scenes, philanthropists and thought leaders worry about what the near-monopoly of the companies leading the race (OpenAI, Anthropic, and the like) could do to humanity. They're channeling millions of dollars to startups they see as capable of competing with the big AI labs and reducing their influence, hoping that the entrepreneurs at those small startups will champion an artificial intelligence that's safer for humanity.
And slowly, the Israeli cyber scene is also entering the picture, becoming a hot investment focus as more people find themselves in the high-number camp, worried about what could happen if we let the AI monster keep growing unchecked.
"What's your number?" I'm asked again and realize there's no way to avoid giving an answer. What can go into this number that represents "the end"? Is it the end of humanity? Global unemployment? Economic crisis or lack of resources?
My number is relatively low, maybe twenty, if only because I'm an optimist by nature. But everyone here understands that any one person's number doesn't really say much, and all we can do is hope that the pessimists among us are wrong.
Nicole Levin is an executive in a stealth AI startup in San Francisco.