Responsible AI: we've been here before

by Max Golby

November 6, 2025

As the Salt team will tell you, I'm somewhat enthralled by AI. 'Obsessed' is probably fairer. Truth be told, I've always been a massive nerd - and I've never been more comfortable owning it. I was building my own PCs in my early teens and queuing outside the Apple Store for every new iPhone. Then I had two young children, and suddenly queuing for iPhones felt less essential.

Yet despite my lifelong fascination with technology, it has never been central to my work - until now. So, what's changed?

Part of it is simply the intrinsic awe and wonder - I won't lie. Whether it's a new model drop or the latest breakthrough in text-to-speech, we've all experienced that pin-drop moment. But many of those examples focus on the 'what', not the 'why'. Looking below the surface, what draws me to AI isn't the technology itself, but its extraordinary, transformative potential. I know it's almost a cliché at this point, but AI undeniably represents the most significant technological shift since the Industrial Revolution. The breadth of its potential applications is seemingly endless - from changing the ways we all work (and live) to addressing some of humanity's most pressing challenges. And remarkably, today's capabilities may pale in comparison to what's coming.

But as AI's reach expands, so do the stakes. The same systems that promise extraordinary benefits also risk unintended consequences - amplifying inequalities, outpacing policy and oversight, and yes, straining environmental resources [1].

Potential that's hard to exaggerate

But let's start with the (really) good stuff.

We've all seen it - AI is already demonstrating extraordinary potential to address some of humanity's most pressing challenges, from revolutionising healthcare diagnostics to optimising energy systems and accelerating scientific discovery. The scale of possibility is hard to exaggerate.

Take the climate. Recent research from the Grantham Research Institute at the London School of Economics underscores the opportunity: with proper oversight and collaboration, AI could ultimately reduce global greenhouse gas emissions by 3.2 to 5.4 billion tonnes of carbon dioxide equivalent per year by 2035, more than offsetting its own energy footprint and accelerating progress towards net zero. In this sense, AI can become a powerful catalyst for climate action rather than an impediment - but that outcome depends entirely on the choices we make today [1].

In healthcare, a world-leading NHS trial is using AI to help radiologists detect breast cancer earlier and more accurately, potentially saving thousands of lives and reducing pressure on healthcare systems [2]. Indeed, in the US, the FDA has already approved AI-powered tools that can predict a woman's breast cancer risk years in advance [3], and a large-scale real-world study in Germany showed that integrating AI into mammography screening can increase cancer detection rates by over 17%, while also reducing radiologists' workload [4].

So, hopefully you're clear by this point - I really do like AI. It actually deserves the superlatives. It will change the world in ways we still can't imagine, but that childlike excitement needn't blind us to the risks, or stop us looking ahead.

Repeating the same mistakes: humanity's favourite pastime

Looking at the world in 2025, it's hard not to feel a creeping sense of déjà vu. With climate change, we ignored the warnings until the consequences became undeniable - waiting decades to act on what scientists told us in the 1980s and 1990s. Are we about to make the same mistake with AI?

The irony is striking: AI remains so under-regulated that even its leading architects have called for more government oversight. At a 2023 Congressional hearing, OpenAI CEO Sam Altman stated: "We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful [AI] models." His enthusiasm for regulation has since waned, but perhaps those earlier words were his most prescient? [5,6]

Objectively, AI's carbon footprint today is very small. But is it just today that we're interested in? Cars would once have been dismissed as negligible contributors to global emissions and nothing to worry about. Today, road transport accounts for about 12% of total global greenhouse gas emissions [7]. By comparison, AI's present-day share of global electricity use is still extremely small, but the scale of what's around the corner is staggering. For example, while data centres being built today range from 1,000 to 2,000 megawatts (equivalent to one-and-a-half to two-and-a-half times San Francisco's entire power demand), OpenAI has already drafted plans for supercomputers requiring 5,000 megawatts, matching the average demand of New York City [8]. At current expansion rates, the capacity we'll need to add to the global grid by decade's end is equivalent to two to six new Californias [8]. Taking a more global view, McKinsey expects AI-related data centres to require $5.2 trillion in investment by 2030, with global capacity projected to nearly triple [9,10].

Given this scale, dismissing AI's footprint as trivial - or avoiding the conversation entirely - is the definition of short-term thinking. If we wait until the environmental impact is overwhelming, we'll repeat history's mistakes: recognising and responding to a problem only when it's already too late.

The environmental and human footprint we can't ignore

Even today, AI's effects aren't negligible. Take water usage. As it stands, data centres are typically cooled with fresh water [8], and the scale of consumption is striking. A Washington Post investigation with the University of California, Riverside found that generating a single 100-word email with GPT-4 can consume over half a litre of water - just to cool the servers running the model [11,12]. Multiply that by millions of users and prompts each day, and the cumulative impact is anything but trivial.
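
To make that concrete, here's a quick back-of-envelope sketch. The half-litre figure comes from the research above [11,12]; the daily prompt volume is a purely hypothetical assumption for illustration, not a measured statistic.

```python
# Back-of-envelope water maths. LITRES_PER_EMAIL comes from the Washington Post /
# UC Riverside work cited above; PROMPTS_PER_DAY is a hypothetical figure chosen
# purely for illustration.
LITRES_PER_EMAIL = 0.5          # ~half a litre to cool one 100-word GPT-4 email
PROMPTS_PER_DAY = 10_000_000    # hypothetical: 10 million such prompts per day

daily_m3 = LITRES_PER_EMAIL * PROMPTS_PER_DAY / 1_000   # 1 cubic metre = 1,000 L
annual_m3 = daily_m3 * 365

print(f"Daily cooling water:  {daily_m3:,.0f} m3")   # 5,000 m3 - two Olympic pools
print(f"Annual cooling water: {annual_m3:,.0f} m3")  # ~1.8 million m3 per year
```

Even under these deliberately modest assumptions, the totals add up quickly - which is exactly the point.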

Putting aside statistics, what about the human impact? Take Arizona, for example, where water scarcity is already a pressing issue. The rapid expansion of data centres there is raising concerns about resource strain and long-term sustainability. Both national and local reporting has documented how data centre projects have drawn backlash from residents worried about water use and energy demands, and questioning whether the promised economic benefits will actually materialise [13,14,15]. Some developers are now promising new low-water cooling technologies, but community scepticism remains high [16,17].

And then there's the human labour itself, often hidden from view. You know that whole 'model training' or 'data labelling' piece that's essential to bringing a Large Language Model (LLM) to market? Ever wondered who actually does that? As journalist Karen Hao, author of Empire of AI, writes, "AI is not magic and it actually requires an extremely large amount of human labor and human judgment to create these technologies." AI companies contract workers in Global South countries for very low wages to annotate data, perform content moderation, and teach models to provide helpful responses [8].

The reality of this work is precarious at best. Al Jazeera has documented data labellers in Kenya who wake at 2am to check if tasks have been uploaded to apps, racing to claim work before others in a country facing high youth unemployment [18]. The jobs offer no stability, no benefits, and wages as low as $2 per hour - even as the tech companies contracting them earn vastly more per worker [19].

But the human cost goes far beyond low pay and job insecurity. Hao travelled to Kenya to speak with workers OpenAI had contracted to build a content moderation filter. These workers were exposed to deeply disturbing content and "were completely traumatised and ended up with PTSD for years after this project, and it didn't just affect them as individuals; that affected their communities and the people that depended on them" [8]. CBS' 60 Minutes has similarly documented data labellers reviewing hours of graphic violence, child abuse, and extreme pornography for Meta and OpenAI - work that left them psychologically damaged but with inadequate mental health support [19]. These are the real people underpinning the AI revolution, and their struggles are a stark reminder that the benefits of AI are not evenly shared - and that its costs are often borne by those with the least power to refuse them.

Performance vs. responsibility - a false dichotomy

So, what does this mean for us? And will responsible usage kill the fun or slow us down?

Well, it turns out that most daily AI tasks - like summarising emails, extracting bullet points, or drafting simple content - don't require the most advanced, resource-hungry 'reasoning' models. Using a default model for everyday prompts can save vast amounts of water and energy compared to advanced reasoning models, which, according to MIT Technology Review, can require up to 43 times more energy for the same simple tasks [20]. If you wouldn't leave the tap running all day to fill a single glass, don't default to the highest-compute model for your question about that spot on your chin or your forthcoming trip to Greece.
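
In practice, 'right tool, right job' can even be automated. Here's a minimal sketch of a model router using the OpenAI Python SDK - the keyword heuristic and the choice of model tiers are illustrative assumptions on my part, not recommendations of specific products.

```python
# A minimal "right tool, right job" router: everyday prompts go to a small
# default model, and the heavyweight tier is reserved for everything else.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Tasks that rarely benefit from a heavyweight reasoning model (illustrative).
LIGHTWEIGHT_TASKS = ("summarise", "summarize", "bullet points", "draft", "rewrite")

def pick_model(prompt: str) -> str:
    """Route everyday prompts to a small model; keep the big one for the rest."""
    if any(task in prompt.lower() for task in LIGHTWEIGHT_TASKS):
        return "gpt-4o-mini"  # hypothetical 'default' tier
    return "gpt-4o"           # hypothetical 'advanced' tier

def complete(prompt: str) -> str:
    response = client.chat.completions.create(
        model=pick_model(prompt),
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(complete("Summarise this email in three bullet points: ..."))
```

A real router might classify prompts more cleverly, but even this crude heuristic captures the principle: reserve heavyweight compute for the tasks that genuinely need it.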

And this needn't take radical behaviour change or stop all the fun. It's just about being vaguely conscious of the cumulative impact of your digital habits, just as most of us have been for years with something as basic as recycling. And by all means, get stuck in - test out those more 'advanced' models. I know I do. Experimentation is often the trigger for deep learning and innovation for so many of us, and we need to strike the right balance. Just remember - you don't need 'GPT-5 Pro' to draft an awkward email to a client, or to summarise your last meeting.

In fact, appropriate use of AI isn't just a win for sustainability; it's often a win for performance and accuracy too. Peer-reviewed research in Nature and technical reports from OpenAI show that larger, more complex AI models can have significantly higher 'hallucination' rates when misapplied - meaning they're more likely to generate false or irrelevant information than their smaller predecessors [21,22]. In many everyday instances, a simpler model is not only greener but more reliable. This principle holds especially true with something like Retrieval Augmented Generation (RAG), where smaller, focused models paired with strong retrieval systems can outperform larger models on precision and speed, while also constraining compute and saving energy.
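
To illustrate the idea, here's a minimal RAG sketch. The toy corpus, model names, and in-memory retrieval are illustrative assumptions; a production system would use a proper vector store. The point is the division of labour: retrieval supplies the facts, so generation can run on a small model.

```python
# A minimal Retrieval Augmented Generation (RAG) sketch: a simple retrieval
# step paired with a small model, instead of one giant model doing everything.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Toy knowledge base the model can draw on instead of "knowing" everything.
DOCS = [
    "Client visits must be booked at least two weeks in advance.",
    "Expense claims are submitted through the finance portal by month end.",
    "Internal drafting tasks default to the lightweight model tier.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

DOC_VECS = embed(DOCS)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed([query])[0]
    scores = DOC_VECS @ q / (np.linalg.norm(DOC_VECS, axis=1) * np.linalg.norm(q))
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # small model: the retrieval does the heavy lifting
        messages=[{"role": "user",
                   "content": f"Answer using only this context:\n{context}\n\nQ: {query}"}],
    )
    return resp.choices[0].message.content

print(answer("How far ahead do I need to book a client visit?"))
```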

But there's perhaps a broader question worth considering too: just because we can do something with AI, does that mean we should? Take the recent craze of turning yourself into an 'AI action figure' or generating endless variations of your face as a Renaissance painting. Taken in isolation, and at risk of appearing a complete misery, this is usually pretty frivolous (if reasonably good fun). It's also a silly amount of compute once you extrapolate your usage across the entire global user base of ChatGPT.

But we've already acknowledged experimentation as a key driver of innovation. So how do we square that circle? At least in part, this comes down to the intentionality of our usage. Repeatedly mucking about with an AI image generation tool 'just for a laugh' is increasingly hard to defend. But what if you're simultaneously testing techniques, approaches, or ideas that could be relevant for clients? We don't need to stop experimenting (if anything, the average user probably needs to do more). I'm just increasingly trying to ask myself: how might this technique or tool I'm testing have relevance to something of actual value? More often than not, there will be a connection, and it won't require a giant leap to get from concept to value. Thinking this way from the start naturally drives us to identify potential applications rather than just 'doing the thing' because we can. A little intentionality goes a long way, and curiosity paired with purpose is far more rewarding than consumption for its own sake.

The bottom line is, responsible AI doesn't require radical behaviour change - just a shift from "maximum power, always" to "right tool, right job." History shows us that the cost of inaction and overuse always comes due, sooner or later. We have the chance, right now, to choose a path that's both innovative and responsible - and to have a load of fun along the way.

Keywords

Responsible AI, AI sustainability, environmental impact of AI, AI ethics, data centre energy consumption, AI carbon footprint, ethical AI use, AI and climate change, AI regulation, sustainable technology, AI water usage, responsible technology use, digital sustainability, AI best practices, tech industry accountability, AI environmental cost, machine learning ethics, conscious AI usage

References

  1. Grantham Research Institute (2024). New study finds AI could reduce global emissions annually by 3.2 to 5.4 billion tonnes of carbon dioxide equivalent by 2035. https://www.lse.ac.uk/granthaminstitute/news/new-study-finds-ai-could-reduce-global-emissions-annually-by-3-2-to-5-4-billion-tonnes-of-carbon-dioxide-equivalent-by-2035/
  2. UK Government. World-leading AI trial to tackle breast cancer launched. https://www.gov.uk/government/news/world-leading-ai-trial-to-tackle-breast-cancer-launched
  3. Breast Cancer Research Foundation. Clairity Breast AI: Artificial Intelligence mammogram approved. https://www.bcrf.org/blog/clairity-breast-ai-artificial-intelligence-mammogram-approved/
  4. Nature (2024). Real-world study shows AI integration in mammography screening increases cancer detection rates. https://www.nature.com/articles/s41591-024-03408-6
  5. BBC News (2023). Sam Altman Congressional hearing on AI regulation. https://www.bbc.co.uk/news/live/world-us-canada-65610337
  6. Wired. Sam Altman, AI regulation, and Trump. https://www.wired.com/story/plaintext-sam-altman-ai-regulation-trump/
  7. Rhodium Group (2024). Global greenhouse gas emissions 1990-2022 and 2023 estimates. https://rhg.com/wp-content/uploads/2024/11/Global-Greenhouse-Gas-Emissions-1990-2022-and-2023-Estimates.pdf
  8. Reuters (2025). Karen Hao: How the AI boom became the new imperial frontier. https://www.reuters.com/lifestyle/karen-hao-how-ai-boom-became-new-imperial-frontier-2025-07-03/
  9. McKinsey. The cost of compute: A $7 trillion race to scale data centers. https://www.mckinsey.com/~/media/mckinsey/industries/technology%20media%20and%20telecommunications/telecommunications/our%20insights/the%20cost%20of%20compute%20a%207%20trillion%20dollar%20race%20to%20scale%20data%20centers/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers.pdf
  10. IFRI. AI, data centers and energy demand: Reassessing and exploring trends. https://www.ifri.org/en/papers/ai-data-centers-and-energy-demand-reassessing-and-exploring-trends-0
  11. Washington Post (2024). Energy use of AI: Electricity and water consumption in data centers. https://www.washingtonpost.com/technology/2024/09/18/energy-ai-use-electricity-water-data-centers/
  12. Li, P., Yang, J., Islam, M.A., and Ren, S. (2025). Making AI less ‘thirsty’. Communications of the ACM, 68(7), 54-61. https://doi.org/10.1145/3724499
  13. Circle of Blue (2025). Data centers: A small but growing factor in Arizona’s water budget. https://www.circleofblue.org/2025/supply/data-centers-a-small-but-growing-factor-in-arizonas-water-budget/
  14. AZ Central (2025). Arizona data centers could threaten environment. https://www.azcentral.com/story/money/business/tech/2025/08/04/arizona-data-centers-could-threaten-environment/85477768007/
  15. Bloomberg (2025). AI impacts: Data centers, water, and data. https://www.bloomberg.com/graphics/2025-ai-impacts-data-centers-water-data/
  16. AZ Luminaria (2025). Project Blue data center pushes ahead in Pima County, promising new low-water cooling tech. https://azluminaria.org/2025/09/17/project-blue-data-center-pushes-ahead-in-pima-county-promising-new-low-water-cooling-tech/
  17. Nixon Peabody (2025). Water use in US data centers: Legal and regulatory risks. https://www.nixonpeabody.com/insights/articles/2025/09/05/water-use-in-us-data-centers-legal-and-regulatory-risks
  18. Al Jazeera (2024). In rural Kenya, young people join AI revolution. https://www.aljazeera.com/features/2024/2/3/in-rural-kenya-young-people-join-ai-revolution
  19. CBS News. 60 Minutes: AI work in Kenya and exploitation. https://www.cbsnews.com/news/ai-work-kenya-exploitation-60-minutes/
  20. MIT Technology Review (2025). AI energy usage: Climate footprint of big tech. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
  21. Nature (2024). Research on AI hallucination rates. https://www.nature.com/articles/s41586-024-07421-0
  22. New Scientist (2024). AI hallucinations are getting worse and they're here to stay. https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/