[Image: A symbolic representation of AI development at a crossroads, with one path leading to ethical, sustainable progress and another to unchecked growth]

Why Stargate's $500B AI Vision Worries Me

A critical look at the world's largest AI initiative and its implications for humanity, climate, and social justice

Tassilo Weber
Founder of Climestart
January 24, 2025

People who know me know that I'm not easily worried. But Stargate worries me deeply. It's been a few years since I read Superintelligence by Nick Bostrom, but I don't think the concerns raised there have lost any relevance. If there's one thing we can say about AI, it's that it will change the world in ways that dwarf the impact it has already had—and no one knows what that change will look like. Therefore, it matters profoundly what intentions its creators have in mind.

Stargate is a plan to pump $500 billion into AI, putting that power into the hands of imperialist monopolists while removing AI regulations without hesitation, all in the sole pursuit of economic and geopolitical dominance. I don't see climate, nature, or social justice playing any significant role in that setup.

A $500 Billion Bet on an Uncertain Future

To understand why Stargate—the largest AI initiative ever announced—raises so many red flags, it's worth diving into its implications. The project represents a massive injection of capital into AI development, ostensibly to ensure that the U.S. maintains global leadership in artificial intelligence. While the ambition to remain a technological leader may seem reasonable on the surface, the lack of guardrails is deeply troubling. AI ethicists and other prominent critics have pointed out that removing regulatory oversight in such a high-stakes field could be catastrophic. For comparison, consider the Manhattan Project: it was an endeavor driven by urgency, but at least its goals were clear—and even then, its moral and ethical repercussions still haunt humanity.

Stargate's deregulation strategy, which dismantles existing AI safety frameworks, essentially hands over the keys to a potentially world-altering technology without ensuring its safe or ethical use. These frameworks were designed to protect society from the unintended consequences of AI, ranging from biased algorithms to existential risks. Removing them in favor of unchecked progress ignores the lessons we've learned from decades of technological development.

The Concerns Raised by Bostrom, Gawdat, and Harari

When Nick Bostrom wrote Superintelligence, he highlighted the idea of an "intelligence explosion"—a point at which AI systems become so advanced that they can outpace human control. He warned that once such systems exist, they may act in ways that are not aligned with human values. Bostrom's work has inspired global conversations about the need for strong safeguards and ethical considerations in AI development. With Stargate, however, we see the opposite approach: a race to innovate without clearly defined boundaries or accountability mechanisms.

Mo Gawdat, former Chief Business Officer at Google [X], has repeatedly voiced concerns about AI's rapid advancement. He once remarked that AI development feels like "releasing a new species into the world," with unpredictable outcomes. Gawdat emphasizes that the creators of AI must prioritize humanity's well-being, rather than short-term profits or nationalistic agendas. Stargate, however, appears to be firmly rooted in the latter motivations, focusing on economic supremacy rather than ethical responsibility.

Yuval Noah Harari has taken a broader societal view, cautioning that AI could erode democracy, manipulate information, and create vast inequalities. He's argued that without proper oversight, AI might serve as a tool for authoritarian regimes or corporations to control populations. Harari's insights are particularly relevant when examining Stargate, as the project's deregulated nature opens the door for misuse by those in power.

What's Missing from the Stargate Vision

One of the most glaring omissions in Stargate's framework is any meaningful consideration of climate, nature, or social justice. AI has the potential to be a powerful force for good in addressing global challenges like climate change, biodiversity loss, and social inequality. Yet, Stargate's design seems narrowly focused on economic and geopolitical gains. This lack of broader vision is a missed opportunity—and a dangerous one.

Imagine if $500 billion were directed toward AI solutions for renewable energy optimization, carbon capture, or equitable healthcare distribution. These are areas where AI could genuinely improve the world, aligning technological progress with ethical imperatives. Instead, Stargate's focus on deregulation and competition risks exacerbating existing inequalities and ignoring the planet's most urgent needs.

A Public That Cheers or Shrugs

What's baffling to me is that most public reactions to Stargate are either enthusiastic or critical only of the fact that the funds aren't secured. Some are even offering tips on how to improve it and lower barriers further, in hopes of making it as "successful" as the Manhattan Project. I do see parallels to the Manhattan Project, with one crucial difference: this time there is no Nazi Germany racing to build it first. And this time, no one knows what they are really building.

This public response reflects a troubling complacency. We've become so accustomed to celebrating technological advancements that we often fail to ask the hard questions: Who benefits from this? Who might be harmed? What unintended consequences could arise? The lack of critical engagement with Stargate's implications is a symptom of a broader issue—a society that prioritizes innovation over introspection.

AI That Is Pro-Human, Pro-Justice, Pro-Climate

I am absolutely pro-AI, but AI must also be pro-human, pro-justice, pro-climate, and pro-nature as overarching values. In this project, by far the largest AI initiative to date, I see a setup for the opposite development. That's why I'm worried. Stargate represents a pivotal moment in AI's evolution, and the choices made now will shape its trajectory for decades to come.

Bostrom, Gawdat, and Harari's warnings are not theoretical musings—they are urgent calls to action. If we ignore their insights, we risk creating a future where AI serves the few at the expense of the many, where progress is measured in GDP rather than human well-being, and where the planet pays the ultimate price.

artificial intelligence · ethics · technology · climate · social justice · regulation · innovation