US National Security Experts Warn AI Giants Aren’t Doing Enough to Protect Their Secrets

Last year, the White House struck a landmark safety deal with AI developers that saw companies including Google and OpenAI promise to consider what could go wrong when they create software like that behind ChatGPT. Now a former domestic policy adviser to President Biden who helped forge that deal says that AI developers need to step up on another front: protecting their secret formulas from China.

“Because they are behind, they are going to want to take advantage of what we have,” said Susan Rice, referring to China. She left the White House last year and spoke on Wednesday during a panel about AI and geopolitics at an event hosted by Stanford University’s Institute for Human-Centered AI. “Whether it’s through purchasing and modifying our best open source models, or stealing our best secrets. We really do need to look at this whole spectrum of how do we stay ahead, and I worry that on the security side, we are lagging.”

The concerns raised by Rice, who was formerly President Obama’s national security adviser, are not hypothetical. In March the US Justice Department announced charges against a former Google software engineer for allegedly stealing trade secrets related to the company’s TPU AI chips and planning to use them in China.

Legal experts at the time warned it could be just one of many examples of China trying to unfairly compete in what’s been termed an AI arms race. Government officials and security researchers fear advanced AI systems could be abused to generate deepfakes for convincing disinformation campaigns, or even recipes for potent bioweapons.

There isn’t universal agreement among AI developers and researchers that their code and other components need protecting. Some don’t view today’s models as sophisticated enough to warrant locking down, and companies like Meta that develop open source AI models already release much of what officials such as Rice would suggest holding close. Rice acknowledged that stricter security measures could set US companies back by shrinking the pool of people working to improve their AI systems.

Interest in—and concern about—securing AI models appears to be picking up. Just last week, the US think tank RAND published a report identifying 38 ways secrets could leak out from AI projects, including bribes, break-ins, and exploitation of technical backdoors.

RAND’s recommendations include encouraging staff to report suspicious behavior by colleagues and allowing only a small number of employees access to the most sensitive material. The report focuses on securing so-called model weights, the values inside an artificial neural network that get tuned during training to imbue it with useful functionality, such as ChatGPT’s ability to respond to questions.

Under a sweeping executive order on AI signed by President Biden last October, the US National Telecommunications and Information Administration is expected to release a similar report this year analyzing the benefits and downsides of keeping weights under wraps. The order already requires companies developing advanced AI models to report to the US Commerce Department on the “physical and cybersecurity measures taken to protect those model weights.” And the US is considering export controls to restrict AI sales to China, Reuters reported last month.