

For a while, A.I. enjoyed the rare privilege of being sold as everything all at once: a productivity tool, a growth story, a labor-saving device and, by implication, a cleaner, smarter way of running a business. It was the corporate miracle diet. It helped cut costs, increase output, automate the boring bits and, somehow, remain comfortably on track for net zero.
Miracles, however, tend to look more expensive when the electricity bill arrives. As the power demands of A.I. infrastructure become harder to ignore, an awkward reality is moving from the margins into the boardroom: the economics of A.I. and the politics of sustainability are no longer neatly aligned.
That tension is now visible in plain numbers. Google’s latest environmental reporting showed greenhouse gas (GHG) emissions up 48 percent against its 2019 base year, driven in part by data center energy use and supply chain growth linked to its A.I. expansion. Microsoft reported total Scope 1, 2 and 3 GHG emissions up 29.1 percent against its 2020 baseline, with much of the increase due to the infrastructure build required for new A.I. services.
Amazon, meanwhile, reported a rise in emissions associated in part with construction and expansion across its estate, including the building of data centers. For years, the digital economy liked to present itself as airy and weightless. It turns out the cloud is made of concrete, copper, water and a great deal of power.
And more of that power is going to be needed as we do more with A.I. The International Energy Agency (IEA) recently reported that electricity use from data centers is on course to more than double by 2030, with A.I. the leading cause of that increase. In the U.S., power demand is forecast to hit record levels as data center load rises. Some of that electricity will come from renewables, of course, but a significant share of it will not. As demand bites, the risk is not simply that A.I. consumes more energy. It is that it helps lock in dirtier energy for longer, because the commercial imperative for a reliable supply arrives before the green grid does.
This is where the corporate script begins to wobble. Almost every large company now wants to talk about two things at once: its enthusiasm for A.I. and its commitment to sustainability. In theory, the two can coexist. In practice, they are starting to share space about as comfortably as a bonfire at a climate summit.
Yet most boards are not discussing A.I. principally as an environmental problem but as a governance problem. And from their vantage point, that concern makes perfect sense. The live fear in boardrooms is not that a chatbot has used too much water in Cheshunt. It is that an employee has pasted confidential material into a public large language model (LLM), breached the company’s data policy, exposed client data or created legal trouble at industrial scale. The immediate concern is control: who is using what, on which systems, with what information and under whose authority.
That is why so many CIOs and CTOs now issue a different version of the same command: use the approved enterprise tool, not the latest consumer toy. Keep the data inside the fence—use our “private LLM.” Stop staff freelancing with “shadow A.I.” in the same way earlier generations were told to stop storing files on personal USB sticks or forwarding work to private email accounts. Governance is where the heat is because governance is where the liability is.
But there is a trick being played here, mostly unintentionally. By treating A.I. chiefly as a problem of security, compliance and policy, companies can feel they are managing it responsibly while leaving its environmental costs largely unexamined. If the model is licensed properly, the permissions are tidy and the prompts are happening in a sanctioned enterprise environment, then the organization can tick the governance box. What it cannot do so easily is explain away the resource intensity of the infrastructure underneath.
That matters because many businesses are still approaching A.I. with the intellectual seriousness of a gold rush. They know they need an A.I. strategy in the same way previous generations knew they needed a “digital transformation strategy,” whether or not anyone could say exactly what problem was being solved in the process. The result is predictable: too many projects, too little discipline and an instinct to reach for the largest possible technical answer before exhausting the smaller, cheaper and less energy-hungry ones.
The answer? Most organizations should begin with the use case, not with the model. What is the decision, workflow or reporting burden that actually needs improving? What data is required? Can it be governed properly? Is a full generative A.I. layer necessary or would a simpler analytics, retrieval or rules-based approach do the job better? That sounds almost offensively sensible, which is precisely why it is useful.
This perspective is shaped in part by our roots in ESG analysis, including work spotting misrepresentation in the performance figures published by listed companies. That background offers a revealing perspective on today’s A.I. boom. The next problem may not be whether companies are using A.I. at all. It may be whether they are overstating the upside while understating the environmental overhead. Corporations are very good at declaring efficiency gains. They are far less enthusiastic about discussing their rapidly rising power and water bills.
None of this suggests that A.I. is a sham or that businesses should retreat into candlelight and spreadsheets. Used well, A.I. can absolutely reduce waste, improve reporting, streamline operations and help companies make better decisions. But it is not magic. It is infrastructure. And infrastructure demands power.
The companies that can navigate this balancing act successfully will not be the ones with the loudest rhetoric about transformation. They will be the ones with the discipline to ask a slightly unfashionable question before scaling yet another workload: Is our next use of A.I. genuinely worth what it costs? That is not anti-innovation. It is what human-in-the-loop supervision should look like.

