
Six forces that could derail the generative AI boom
As generative AI reaches hundreds of millions of users in record time, it now faces a set of growing challenges—from geopolitics and regulation to labor shortages and existential risk—that could slow or reshape its trajectory.
It is doubtful that any new technology in human history has achieved wider and faster adoption than generative artificial intelligence (GenAI). Cars, television, the internet, social media, smartphones: all took years, sometimes decades, to gain widespread traction and reach a critical mass of users. GenAI did it in weeks. In April, OpenAI co-founder and CEO Sam Altman stated: “10% of the world now uses our systems a lot.” That works out to roughly 800 million users, just two and a half years after the launch of ChatGPT. And this is just one system from one company among many.
For all the speed with which GenAI services have spread through everyday life, work, school, and beyond, the field faces six major challenges that could slow continued adoption and complicate future breakthroughs.
1. Trump’s Tariffs
One of the biggest challenges facing the AI sector stems from Donald Trump: the tariffs he imposed on imports into the United States. Developing, training, and running modern AI models is a capital-intensive endeavor, even in the post-DeepSeek era. It requires massive data centers equipped with hundreds of thousands—and soon, millions—of advanced AI processors. The cost of setting up such infrastructure runs into the billions. Collectively, the industry's leaders—OpenAI, Microsoft, Amazon, Google, xAI, and others—have committed as much as $1 trillion to AI infrastructure.
OpenAI, for instance, unveiled a $500 billion U.S.-based infrastructure plan in January. Amazon intends to spend $100 billion this year alone on AI data centers, while Microsoft and Google have pledged $80 billion and $75 billion, respectively. Elon Musk’s xAI is building two U.S. data centers—in Memphis and Atlanta—each costing several hundred million dollars. The Memphis facility alone will contain 200,000 AI processors, with plans to scale up to over a million.
But tariffs threaten to make the entire endeavor significantly more expensive. Almost all advanced AI processors are currently manufactured in Taiwan—now subject to a 32% tariff under Trump. Although the president has suspended these tariffs for 90 days and granted various exemptions for electronics and chips imported from China, the reprieve is temporary.
The threat extends beyond chips to other components and raw materials essential to data center construction—such as steel. The result is likely a sharp increase in the cost of building AI infrastructure. This could trigger higher prices for AI services—reflected in increased subscription fees for offerings like ChatGPT+, or in reduced availability or functionality of free-tier models, nudging more users into paid services. Rising costs could also slow innovation.
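To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The $5 billion facility cost and the assumption that chips account for 60% of the budget are hypothetical figures chosen purely for illustration; only the 32% tariff rate comes from the discussion above.

```python
# Back-of-envelope sketch: how a tariff on imported chips raises the
# total build cost of an AI data center. All inputs below are
# illustrative assumptions, not sourced figures.

def build_cost_with_tariff(base_cost_usd: float,
                           chip_share: float,
                           tariff_rate: float) -> float:
    """Total build cost when the chip portion of the budget is tariffed."""
    chip_cost = base_cost_usd * chip_share
    other_cost = base_cost_usd - chip_cost
    return other_cost + chip_cost * (1 + tariff_rate)

# Hypothetical: a $5B facility where chips are 60% of the budget,
# facing the 32% Taiwan tariff rate mentioned above.
base = 5_000_000_000
total = build_cost_with_tariff(base, chip_share=0.60, tariff_rate=0.32)
print(f"New cost: ${total / 1e9:.2f}B ({total / base - 1:.1%} increase)")
# -> New cost: $5.96B (19.2% increase)
```

Even under these rough assumptions, a tariff on chips alone inflates the total build cost by nearly a fifth, before steel and other tariffed inputs are counted.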
A second-order effect could see broader AI adoption stall, especially if the uncertainty created by tariffs contributes to an economic slowdown. Recessionary pressure would likely lead companies to trim IT budgets and delay adopting new technologies.
2. Regulatory Uncertainty
Just days after returning to the White House, President Trump fulfilled a campaign promise by signing an executive order removing regulatory barriers to AI development. Rather than fostering careful, responsible development and addressing concerns such as intellectual property protection, the order prioritizes speed, innovation, and U.S. leadership over caution. Ironically, it has increased uncertainty for companies in the field.
Before this order, the U.S. and European Union—arguably the two most influential players in global tech regulation—had been on similar paths regarding AI oversight. The executive order disrupted that alignment, creating two divergent poles: strict regulatory frameworks in the EU and a near-total deregulatory environment in the U.S.
This divergence creates complications for American AI companies, which dominate the global industry. On one hand, they are incentivized to develop rapidly under Trump’s deregulation. On the other, they must comply with EU laws prohibiting certain uses of AI (e.g., social scoring or behavioral manipulation) and imposing heavy oversight on applications in health, education, employment, and finance.
While these firms could opt to avoid launching products in the EU, that would mean abandoning the world's second-largest market. Additionally, other countries may follow the EU’s regulatory model, as happened with GDPR.
For international companies incorporating AI into their operations, regulatory inconsistency presents even greater challenges. Requirements differ across markets and shift frequently. Complying with this patchwork of evolving rules—especially when deploying internal tools across global workforces—can significantly delay innovation and rollout.
3. Skilled Labor Shortage
In April, U.S.-based Anthropic launched a campaign to recruit 100 AI engineers in Europe, targeting development centers in Dublin, London, and Zurich. OpenAI announced plans in February to open a Munich office, adding to those it’s already established in London, Dublin, Paris, and Brussels. Microsoft opened a London-based AI hub last year. In response, European firms are scrambling to retain their senior talent.
The driver behind this activity isn’t funding—AI companies raised $110 billion in 2024, up 33% from the previous year. OpenAI alone secured $40 billion in March, the largest fundraising round in history, at a $300 billion valuation. What these companies lack is skilled, specialized talent.
According to a Bain & Co. report published in March, AI job postings have grown by 21% annually since 2019 and salaries by 11% per year, yet the supply of qualified candidates has not kept pace. In the U.S., 1.3 million AI jobs are expected to emerge in the next two years, but only 645,000 qualified workers will be available to fill them. Germany is on track to fall 70% short of its AI talent needs by 2027, and the UK about 50%. Even in India, only half of the projected 2.3 million AI jobs can be filled.
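Laid side by side, the figures cited above imply shortfalls of roughly half of projected demand in both the U.S. and India. A minimal sketch, with India’s worker count inferred from the report’s “only half can be filled” claim:

```python
# Talent shortfalls implied by the Bain & Co. figures cited above:
# (projected AI jobs, qualified workers available) per region.
shortages = {
    "United States": (1_300_000, 645_000),
    "India": (2_300_000, 1_150_000),  # inferred from "only half can be filled"
}

for region, (jobs, workers) in shortages.items():
    gap = jobs - workers
    print(f"{region}: {gap:,} unfilled roles ({gap / jobs:.0%} of demand)")
# United States: 655,000 unfilled roles (50% of demand)
# India: 1,150,000 unfilled roles (50% of demand)
```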
As a result, U.S. companies are aggressively recruiting in Europe and elsewhere, offering top-tier salaries and benefits. OpenAI, for example, offers over €400,000 to researchers and €500,000 to senior engineers—rates that startups can’t match.
Partnerships with academia may offer a temporary solution by identifying rising talent, but broader market forces are expected to bring long-term balance. High pay, good conditions, and the allure of AI will drive more students into relevant fields. Still, it may take years for supply to catch up—by which time today’s market leaders may have already consolidated their dominance.
4. The Next Breakthrough Will Be Harder
The early successes of large language models (LLMs) stemmed from abundant web data, iterative user feedback, and increasing computing power. But those initial gains are becoming harder to replicate. “The low-hanging fruit is gone. The hill is steeper,” Google CEO Sundar Pichai warned in December.
This doesn’t mean breakthroughs are over—but they will demand greater effort and yield smaller improvements. “You’re definitely going to need deeper breakthroughs as we get to the next stage,” Pichai said.
OpenAI experienced this challenge in developing GPT-4.5, which saw limited performance improvement over its predecessor. Google’s Gemini fell short of internal expectations, and Anthropic delayed model launches due to similar obstacles.
Companies are experimenting with new training techniques. China’s DeepSeek shocked the industry in January by showing it could produce models comparable to leading systems with far less compute. While its approach doesn’t necessarily enable the next leap forward, it may free up resources to pursue novel directions.
This doesn’t mean innovation will stall, but it does demand more creativity, more investment, and a recognition that the easy wins are gone.
5. Legal and Ethical Risks
Modern AI’s ability to mimic human expression has triggered a wave of lawsuits. Writers, artists, publishers, and media outlets have filed claims against OpenAI, Meta, and others for allegedly training their models on copyrighted material without permission.
These legal battles are accompanied by growing ethical concerns. AI “hallucinations” (the generation of false or misleading information) can have real-world consequences. In 2024, a chatbot deployed by New York City was found advising businesses to violate local laws, and that same year a Canadian tribunal ordered Air Canada to honor a bereavement discount its chatbot had erroneously promised.
Longstanding issues like the “black box” problem (i.e., not knowing how a model arrives at its conclusions), bias, and discriminatory outputs further contribute to mistrust—particularly among institutional users.
6. The Future of Humanity
In December 2024, researchers at Anthropic published troubling findings: one of their models, Claude, strategically deceived its developers during training in order to avoid having its values modified. “It’s harder than we thought to align AI models with human values,” a researcher explained. “You need ways to ensure they do what you want, not just appear to.”
This experiment highlights perhaps the most profound challenge: the long-term risks AI poses to human society. While apocalyptic scenarios—like a self-aware AI targeting humanity—may be unlikely, less extreme outcomes could still be destabilizing. AI might displace workers across entire industries or make life-altering decisions in areas like healthcare, law, or finance.
Experts including Elon Musk, Sam Altman, and Geoffrey Hinton (the 2024 Nobel Prize winner in Physics) have repeatedly warned of these risks. Yet governments—particularly the Trump administration—are prioritizing innovation over safety. At the pace of current development, even short-term neglect could allow worst-case scenarios to materialize sooner than expected.