
AI & Sustainability: Science Fiction vs. Climate Change

08.07.2025

In the debate about using AI for public good and combating climate change, tech-critic Paris Marx wants us to fundamentally challenge the ideologies the technology has been built on.

In the search for pathways towards a sustainable future, AI is considered by many not just an essential tool, but a revolutionary technology. At our event in May, Sustainable AI: Narratives and Impacts, Paris Marx and Paul Schütze investigated whether the real-world impacts of AI live up to those promises.

After their talk, we spoke with Marx about the future Silicon Valley envisions for AI, why such an inherently unsustainable technology is considered the way forward, and how we can find hope in local, on-the-ground resistance.


You’ve written a lot about the sustainability of AI technologies, and you've called data centers, which are the backbone of AI, data vampires. Why?

Paris Marx: I actually stole the term from protesters in Ireland, who demonstrated against a data center there – only they called them energy vampires, referring to how much energy data centers use up. Data centers now account for 2–3% of global energy use, about as much as the entire country of France. Today, a single hyperscale data center houses up to 50,000 servers and consumes 2.6 million liters of water every day. So data centers are vampiric on the energy and water systems and give little back. I call them data vampires because, when it comes to data centers and generative AI, sucking out large amounts of data is core to the business model that powers all of this.


What about the aspects that go beyond the environmental impacts? How is AI impacting social justice or economic sustainability?

Paris Marx: AI isn’t doing so well here, and we can see that in the way these technologies are being rolled out across society.

I would argue the impact on labor and jobs is not so much replacing workers as continuing a process of deskilling them, degrading the types of work people do, and reducing the power workers have in their workplaces. Back in the 2010s, there was a lot of talk about how new automation, robotics, and self-driving cars were going to wipe out a ton of jobs. But what we saw a few years on was the rollout of algorithmic management in workplaces and the popularization of the gig economy and its form of renewed piecework. So I think that is going to be an outcome of generative AI as well: the use of technology to reduce labor standards.

But on top of that, these technologies are being used in very harmful ways in broader society. Whether it is in the delivery of public services, healthcare, or the immigration system, these technologies are promoted as increasing efficiency and reducing costs. However, they actually result in discriminatory outcomes and are often used to hide or shield austerity impulses. This can lead to cutting people off from welfare benefits or social support they rightfully should have access to. We have seen the social impacts this has had in many different parts of the world.


Why, then, is AI sold to us as the technology of the future?

Paris Marx: There are many different factors that explain why AI is being pushed right now.

Part of it is the desire for efficiency and the idea that computers will run everything without human input. This is appealing for various reasons. Companies want to rely less on workers, and governments facing budget pressures from decades of tax cuts see it as a way to deliver services more effectively to an aging public.

On top of that, there's the investment piece. The tech industry itself finds AI very appealing and promotes it as a means to transform technology use, promising significant financial benefits. This leads to a lot of venture capitalists pouring money into these sectors, hoping for a return. They're willing to accept short-term losses with the goal of eventual monopolization.

Finally, there's how powerful tech billionaires envision the future. They disseminate science fiction visions of powerful, intelligent computers and spread the belief that this is the future we should strive toward. Consequently, they want to direct our collective resources toward achieving something like that.


What does that vision of the future look like exactly – how do they envision dealing with climate change?

Paris Marx: Prominent figures in Silicon Valley, including Elon Musk, Eric Schmidt, and Sam Altman, have been making statements suggesting they see climate change as less of a threat than they did years ago. Elon Musk explicitly advocates for increased oil and gas production. Many of these individuals hold a longtermist ideology, believing that while climate change might have a short-term impact, it won't significantly affect humanity in the very long term, making it less concerning.

These longtermists effectively argue that people alive today have the same moral value as people who might be alive a million years in the future. And they think that we should take actions today that might allow millions or billions more people to live in the future. They argue that the goal of humanity should be to colonize space and to develop advanced AI, because this will allow us to massively expand the human population into the future. For them, that justifies not addressing things like global poverty, global hunger, and even climate change in the present, because it's this vast, distant future that matters far more.

Whether these tech billionaires inherently believe that, or whether it’s a justification for them to do whatever they want and try to present it in a moral way, is very much an open debate.


National governments are currently investing massively in their own data centers and AI infrastructures in the hope of making their countries more independent and technologically sovereign. What’s behind that promise – will AI make us more resilient?

Paris Marx: I think a lot of these countries are running after AI and data centers because it is the next big thing the tech industry has been pushing. They see a lot of money flowing in, and they want to capture some of that investment. I also think AI has become very enmeshed with geopolitical power, so if a country wants technological autonomy in the future, it needs to develop its own AI industry or capabilities, regardless of whether the technology will actually deliver. I don't think that is going to pay off, and it's going to have not just environmental consequences but potentially broader consequences beyond that, too.

There is a growing discussion about digital sovereignty, about ensuring countries have greater control over technologies used within their borders, and in particular, to be less dependent on US companies for infrastructures, platforms, digital services and so on. However, I would argue a lot of countries or blocs like the European Union are simply trying to build their own Silicon Valleys, rather than thinking about whether that is sustainable or good for the public.

I think we do need digital sovereignty, we do need to be thinking about these issues. But we need to fundamentally challenge what this industry and what these technologies have been built on over the past several decades. Even if we were to have some of these services delivered by domestic companies rather than US companies, I still don't think we're going to get the broader public benefits that we expect, and we're going to have technology developed in a certain direction because it has to serve shareholder value.


What are alternative visions of digitally sustainable futures?

Paris Marx: There's a long history of public broadcasting, public banking, and public postal service. We've already established a role for the public sector where the market cannot deliver. I think we need to recognize that there's a role for that in technology as well. Hopefully, that is something we can reclaim.

Along with that comes an understanding that technology does not need to be inherently anti-environmental or to drive massive increases in energy and water use. Nor does it need computation that relies on massive data collection, as the current model of tech development does. We can fundamentally challenge those ideas and think about something else that is not just public-oriented but sustainable, in line with our environmental goals, and helps create a better society at the end of the day.


Can AI still be part of that?

Paris Marx: I would say I'm quite skeptical when it comes to generative AI. I don't see many useful use cases justifying the large costs in water, energy, and emissions.

I'm more open to traditional forms of AI that are less energy-intensive, like algorithmic systems. However, I still think there's an inherent impulse in rolling out such technologies to aim for efficiency, which, in our current political climate, often means restricting services and austerity. That's where my skepticism comes from, at least right now.


There's been resistance to data centers. What has that looked like, and has it been effective?

Paris Marx: We have seen successes globally from demonstrations, campaigns, and protests against data centers – whether that's temporary moratoriums, stopped projects, or data center companies forced to the table to ensure communities get a better deal from their construction.

For me, data center opposition and growing protests are very hopeful, because this is not a campaign against tech companies rooted in abstract concepts that are hard to relate to people's everyday lives. It's infrastructure right in front of you, potentially next to your community – that is very tangible. I think there's an opportunity not just to grow awareness of what these tech companies are doing more broadly through data center opposition, but also to bring these disparate groups together into a broader movement against these companies. That’s a difficult thing to do – especially with the power tech companies currently wield, particularly through the US government, to defend their interests – but it's a place where I see hope.


What role can digital researchers and science communicators play here?

Paris Marx: It's an important role. People need to understand these things to properly push back against them. Researchers can help detail what is going on through their work, and that then needs to be translated and illustrated for the public – a role science communicators and journalists can fulfill. That’s an important educational piece to all of this.

I'm often critical of journalism as an institution because it often serves to legitimize major tech companies' narratives rather than challenging them. I think there's a real opportunity and necessity to have more critical journalism and communication on these issues. The public needs a proper understanding of what's going on, rather than just the boosterish take that makes people believe AI is the next big thing.

Thank you for the conversation. 



Paris Marx is a tech critic, author, podcaster, and international speaker.

They host the award-winning Tech Won’t Save Us podcast as well as System Crash, and also write the Disconnect newsletter. Marx’s work has been published around the world in outlets like Time, Wired, NBC News, and MIT Technology Review, and has been translated into more than a dozen languages. Their first book, Road to Nowhere, was published in 2022.

Interview by Leonie Dorn and Rainer Rehak


Find out more:


artificial&intelligent? is a series of interviews and articles on the latest applications of generative language models and image generators. Researchers at the Weizenbaum Institute discuss the societal impacts of these tools, add current studies and research findings to the debate, and contextualize widely discussed fears and expectations. In the spirit of Joseph Weizenbaum, the concept of "Artificial Intelligence" is also called into question, unraveling the supposed omnipotence and authority of these systems. The AI pioneer and critic, who developed one of the first chatbots, is the namesake of our Institute.