Experts suggest that the most effective way to ensure AI safety might be to regulate its “hardware” – the chips and data centers, or “compute,” that power AI technologies. 

The report, a collaboration among notable institutions including the University of Cambridge’s Leverhulme Centre for the Future of Intelligence and OpenAI, proposes a global registry to track AI chips and “compute caps” to keep R&D balanced across nations and companies. 

This hardware-centric approach could prove effective because chips and data centers are physical assets, making them easier to regulate than intangible data and algorithms. 

Haydn Belfield, a co-lead author from the University of Cambridge, explains the role of computing power in AI R&D, stating, “AI supercomputers consist of tens of thousands of networked AI chips… consuming dozens of megawatts of power.”

The report, with a total of 19 authors, including ‘AI godfather’ Yoshua Bengio, highlights the colossal growth in computing power required by AI, noting that the largest models now demand 350 million times more compute than they did thirteen years ago. 
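
As a rough back-of-the-envelope sketch, and assuming steady exponential growth over that period (an assumption for illustration, not a claim from the report), a 350-million-fold increase over thirteen years works out to compute roughly doubling every six months:

```python
# Back-of-the-envelope: what sustained growth rate does a 350-million-fold
# increase in training compute over 13 years imply?
import math

total_growth = 350e6   # 350 million-fold increase, as cited in the report
years = 13

annual_factor = total_growth ** (1 / years)                    # growth factor per year
doubling_time_months = 12 * math.log(2) / math.log(annual_factor)

print(f"Implied growth per year: ~{annual_factor:.1f}x")        # ~4.5x per year
print(f"Implied doubling time:   ~{doubling_time_months:.1f} months")  # ~5.5 months
```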

The authors argue that this exponential increase underscores the critical need for governance, both to prevent the concentration of AI capabilities and to keep development from spiralling out of control. Given the enormous power consumption of some data centers, compute governance could also curb AI’s burgeoning impact on energy grids. 

Professor Diane Coyle, another co-author, points out the benefits of hardware monitoring for maintaining a competitive market, saying, “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants.”

Drawing parallels with nuclear regulation, a model others have also invoked for AI, the report proposes policies to enhance the global visibility of AI computing, allocate compute resources for societal benefit, and enforce restrictions on computing power to mitigate risks.

Belfield encapsulates the report’s key message, “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution.”

Multilateral agreements like this demand genuine global cooperation, which, in the case of nuclear power, only came about through large-scale disasters. 

Between the founding of the International Atomic Energy Agency (IAEA) in 1957 and the Chornobyl disaster, there were relatively few nuclear power incidents. 

Now, planning, licensing, and building a nuclear reactor can take ten years or more, because the process is rigorously monitored at every juncture. 

The burning questions are: who would lead a central agency that limits chip supply? Who would mandate such an agreement? Could it be enforced?

And how do you prevent those with the strongest supply chains from benefitting from restrictions on their competitors?

What about Russia, China, and the Middle East? It’s easy to restrict chip supply while China relies on US chip designers like Nvidia, but this won’t be the case forever. China is aiming to become self-sufficient in AI hardware within this decade.

The 100+ page report provides some clues, and this seems like an avenue worth exploring. 

Still, comparing AI to nuclear power feels like conceding that only a major disaster will turn safety sentiment into reality.