"China has signaled interest in joining discussions on setting rules and norms for AI, and we should welcome that," said Bonnie Glaser of the German Marshall Fund to the Breaking Defense site. "The White House is interested in engaging China on limiting the role of AI in command and control of nuclear weapons."
"Nobody wants to see AI controlled nuclear weapons, right?" asked Joe Wang, a former State Department and NSC staffer now at the Arlington, Virginia-based Special Competitive Studies Project, which specializes in AI and emerging technologies. "Like, even the craziest dictator can probably agree."
Call me crazy, but, no, America should not want to enter into any AI agreement with the People's Republic of China on "nuclear C2" — command and control — or any other matter.
Yes, "AI is a civilization-altering technology," as technology analyst Brandon Weichert told Gatestone, and, no, no one should want machines to control the launch of nuclear weapons. Many remember WarGames, the 1983 movie starring Matthew Broderick, in which an American military computer, on its own, simulated an all-out Soviet attack and almost launched a U.S. counterstrike on the Soviet Union.
Life imitates art. In the first few hours of September 26, 1983, one Lt. Col. Stanislav Petrov happened to be the duty officer at the Serpukhov-15 early-warning center south of Moscow. Successive alarms indicated that America had launched five Minuteman missiles from Montana toward Mother Russia. More than thirty reliability checks at Serpukhov-15 confirmed that the attack was indeed underway. Soviet procedure required Petrov to report the launch up the chain of command, the first step toward a retaliatory strike.
Petrov, however, trusted his intuition and ignored the warnings. "I was drenched in sweat," the Soviet officer recalled. "People were shouting, the siren was blaring. But a feeling inside told me something was wrong."
Something was. As it turned out, sensors aboard the Kosmos 1382 satellite misinterpreted sunlight bouncing off the tops of clouds as incoming missiles.
A human's instinct — what Petrov later called "a funny feeling in my gut" — saved a good portion of humanity from incineration that day. An AI-controlled system in that situation would have launched what it believed to be a counterstrike on the American homeland, when in reality it would have been a first strike. As advanced as AI technology is, there is no way to write "gut feelings" into an algorithm.
Nonetheless, just because keeping a human in the loop is absolutely necessary does not mean an agreement is the way to accomplish it.
An agreement requiring a human to make launch decisions would, as a practical matter, be unenforceable. Neither China, Russia, nor the United States would, for instance, allow outsiders to pore over millions of lines of its computer code or station foreign inspectors at its facilities to vet every update.
America does not need another feel-good agreement with China. It already has plenty, including the Biological Weapons Convention, which has no enforcement mechanism. China's solemn obligations under that pact did not prevent the regime from maintaining a string of biological weapons facilities, including the Wuhan Institute of Virology, or from deliberately spreading COVID-19 beyond its borders.
Hope, as they say, is now triumphing over experience. "We're going to get our experts together to discuss risk and safety issues associated with artificial intelligence," U.S. President Joe Biden said on November 16, a day after his summit with Chinese President Xi Jinping in San Francisco. Control of AI was one of three areas Biden said the Chinese leader had agreed to discuss further.
Whatever China wants is almost certainly not in the interest of either the United States or the international community. The risk is that, in yet another unenforceable agreement, the United States will forgo the critical advantages that AI affords in targeting conventional munitions.
"Because autonomous systems will soon dominate warfare, AI rules will be the 21st century equivalent of agreements controlling nuclear weapons," Hamlet Yousef, managing director of IronGate Capital Advisors, an investor in dual-use defense technologies, said to Gatestone.
Whether we like it or not, the world is witnessing the "Rise of the Machines." The prospect of killer robots is chilling, even for those who have not seen any of the Terminator movies, but such horrible devices, like nuclear weapons, cannot be legislated away. Many may feel it is unfortunate that humanity has made such ghastly creations possible, but agreements with inherently untrustworthy regimes, such as China's, will not remedy the situation.
The Chinese regime wants to talk about artificial intelligence largely because it is trailing the U.S. and thinks an agreement would help it catch up. Weichert, also author of Biohacked: China's Race to Control Life, points out that an AI agreement with Beijing would pave the way for China to access the U.S. technology it does not already have. "China doesn't do anything for free—they're going to want America to ease restrictions on access to advanced semiconductors," Yousef said. As Weichert noted, "The U.S. retains a technical lead in chip design over China, giving it decisive advantages in the near term."
So, Bonnie Glaser, the world should not "welcome" China's willingness to talk to America about artificial intelligence. Washington elites may not be able to see this, but China's gambit, in common parlance, is called a "trap."
Gordon G. Chang is the author of The Coming Collapse of China, a Gatestone Institute distinguished senior fellow, and a member of its Advisory Board.