President Biden is expected on Thursday to sign his first national security memorandum detailing how the Pentagon and intelligence agencies should use and protect artificial intelligence technology, placing “guardrails” on how the tools may be used in decisions ranging from nuclear weapons to who should be granted asylum.
The new document is the latest in a series issued by Biden grappling with the challenge of using AI tools to speed up government operations, whether detecting cyberattacks or predicting extreme weather, while limiting the most dystopian possibilities, including the development of autonomous weapons.
But most of the deadlines the order sets for agencies to conduct studies on applying or regulating the tools will come due after Biden leaves office. Although most national security memorandums have been adopted or amended by successive presidents, it is far from clear how former President Donald J. Trump would approach the issue if elected next month.
The new directive is scheduled to be announced Thursday at the National War College by National Security Advisor Jake Sullivan, who championed many of the initiatives to examine what uses and threats the new tools could pose to the United States. In prepared remarks for the event, he acknowledges that one challenge is that the U.S. government funds or owns few of the key AI technologies, and that those technologies are evolving so rapidly that they often outpace regulation.
“Our government played an early and critical role in shaping developments from nuclear physics and space exploration to personal computing and the Internet,” Sullivan will say. “That wasn’t the case for most of the AI revolution. While the Department of Defense and other agencies funded much of the AI research in the 20th century, many of the advances in the past decade have been driven by the private sector.”
But Biden aides said the absence of guidelines from the Pentagon, the CIA, or even the Justice Department on how AI can be used has hindered development, as companies worried about which uses would be legal.
The unclassified version of the new memorandum runs approximately 50 pages, with a classified appendix. Some of its conclusions are obvious: having an AI system decide when to launch a nuclear weapon, for example, is prohibited. That decision rests with the president as commander in chief.
While it’s clear that no one wants the fate of millions of people to depend on the choices of an algorithm, the explicit statement is part of an effort to draw China into deeper discussions about the limits that need to be placed on high-risk applications of artificial intelligence. Initial dialogue with China on the topic took place in Europe this spring, but no substantive progress was made.
“This focuses the question on how these tools influence the government’s most important decisions,” said Herb Lin, a Stanford University scholar who has long studied the intersection of artificial intelligence and nuclear decision-making.
“Obviously, no one is going to provide the nuclear codes to ChatGPT,” Dr. Lin said. “But questions remain about how much of the information the president is getting is being processed and filtered through AI systems, and whether that’s a bad thing.”
However, the rules regarding nonnuclear weapons are vaguer. They essentially require that human decision-makers remain “in the loop” on targeting decisions, or at least monitor AI tools that could select a weapon’s targets, without reducing the weapon’s effectiveness. That will likely be especially difficult if Russia and China begin to make greater use of fully autonomous weapons that remove humans from battlefield decisions and operate at breakneck speed.
Similarly, the president’s new AI “guardrails” would prohibit artificial intelligence tools from making asylum-granting decisions. They would also prohibit tracking someone based on ethnicity or religion, or classifying someone as a “known terrorist” without human review.
Perhaps the most interesting part of the order is that it treats private-sector advances in artificial intelligence, like early nuclear weapons, as national assets that need to be protected from espionage and theft by foreign adversaries. The order requires intelligence agencies to begin protecting work on large language models, and the chips used to develop them, as national treasures, and to provide private developers with up-to-date intelligence to safeguard their inventions.
The order also relies on the AI Safety Institute, a new and little-known organization within the U.S. National Institute of Standards and Technology, to inspect AI tools before they are released, confirming that they could not help a terrorist group build biological weapons or help an adversary like North Korea improve the accuracy of its missiles.
And just as the United States sought to attract nuclear and military scientists after World War II rather than risk their working for rivals like Russia, the memorandum details efforts to bring the best AI experts from around the world to the United States.