A new report by the Information Technology and Innovation Foundation (ITIF) says that the narrative depicting AI’s energy consumption as out of control is overblown and often misleading.

The ITIF, a nonprofit think tank, released its report titled “Rethinking Concerns About AI’s Energy Use,” presenting a reality check on alarmist claims about AI’s energy use and carbon emissions.

The report noted that dramatic headlines about the energy use of new technologies are nothing new. When the dot-com era reached its peak in the 1990s, a Forbes article claimed, “Somewhere in America, a lump of coal is burned every time a book is ordered online.”

The widely cited article went on to claim that “half of the electric grid will be powering the digital-Internet economy within the next decade.”

We now know that those estimates were wildly overblown. The International Energy Agency (IEA) estimates that the data centers and transmission networks that power the internet account for 1-1.5% of global electricity use.

We’ve previously reported on the huge amounts of water and energy consumed in training AI models and during inference, but the ITIF report helps bring a little sanity to our initial panicked response.

Facts vs Fiction

Coming up with accurate figures for AI’s emissions and energy usage is a challenge. Besides the power drawn by the processors themselves, there’s the energy attributable to chip manufacturing, cooling, variable workloads, and so on.

This makes it tough to get an accurate figure and easy to present a believable alarmist one.

In 2019, researchers at the University of Massachusetts Amherst estimated that training Google’s BERT model would have emitted 1,438 pounds of carbon dioxide (CO2) during 79 hours of training. That’s around 75% of the CO2 emissions of a roundtrip flight from New York to San Francisco.

They also estimated that if, hypothetically, a model like BERT were trained using neural architecture search (NAS), one of the most computationally complex problems in machine learning, it would emit 626,155 pounds of CO2.

That’s the equivalent of around 300 roundtrip flights from the East to the West coast of the US. Guess which emissions figure made the headlines.

To make things worse, it turned out that the researchers’ worst-case NAS estimate was too high by a factor of 88. Unsurprisingly, the correction didn’t make the news.
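To put that correction in perspective, here’s a rough back-of-the-envelope check of the figures above. It’s only a sketch: the roughly 2,000 pounds of CO2 per passenger for a roundtrip New York to San Francisco flight is an assumed working number inferred from the article’s own comparisons, not a figure from the ITIF report.

```python
# Rough sanity check of the emissions comparisons above.
# ASSUMPTION: ~2,000 lb of CO2 per passenger for a roundtrip NY-SF flight,
# inferred from the comparisons in the text, not taken from the ITIF report.

FLIGHT_LB = 2_000            # assumed roundtrip NY-SF emissions per passenger (lb CO2)
BERT_TRAINING_LB = 1_438     # 2019 UMass Amherst estimate for training BERT
NAS_ESTIMATE_LB = 626_155    # hypothetical worst-case NAS estimate
OVERESTIMATE_FACTOR = 88     # how far off the NAS estimate later proved to be

print(f"BERT training ≈ {BERT_TRAINING_LB / FLIGHT_LB:.0%} of one flight")   # ~72%
print(f"NAS estimate  ≈ {NAS_ESTIMATE_LB / FLIGHT_LB:.0f} flights")          # ~313

corrected = NAS_ESTIMATE_LB / OVERESTIMATE_FACTOR
print(f"Corrected NAS ≈ {corrected:,.0f} lb, or {corrected / FLIGHT_LB:.1f} flights")
# ~7,115 lb, or roughly 3.6 flights rather than ~300
```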

It’s getting better, not worse

Yes, training AI models, and even more so running inference on them, uses a lot of energy. However, the report noted that efficiencies in AI models and hardware will drive energy usage down over time.

Here’s a short version of the reasons given in the report:

As the incremental rate of improvement in AI models slows, developers will focus on making models more efficient to make them economically viable.
AI chips are getting more efficient. Between 2010 and 2018, there was a 550 percent increase in compute instances and a 2,400 percent increase in storage capacity in global data centers, but only a 6 percent increase in global data center energy use.
The substitution effects of AI should be considered. Downloading a book is more environmentally friendly than printing and delivering one. In similar ways, AI can eliminate higher carbon-emitting tasks. Humans emit a lot more carbon when typing a page of text than when having an AI generate it.
AI can also make utility systems more efficient, process complex climate change data, enable precision agriculture, and optimize logistics, all of which reduce carbon emissions.

While the report says AI energy use is less alarming than has been reported, it does call for energy transparency standards for AI models to make benchmarking easier.

The ITIF also concluded that excessive regulation of AI models may be making them less energy efficient, as debiasing techniques for LLMs add to their energy costs.

The report is worth reading in its entirety. It has more excellent examples that highlight how those opposed to accelerating AI development use misleading energy usage data to make their case.

The report concluded by pointing to a columnist in The Guardian who repeated the discredited 2019 BERT figures in December 2023, two years after they were shown to be false and misleading. The problem isn’t going away.

Don’t believe everything the technophobes claim. Traveling by train will not destroy the human body, the internet doesn’t consume the majority of our electricity, and AI probably isn’t going to destroy the environment.