Several employees of OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, have filed an amicus brief in support of Anthropic in its legal fight against the US government. Amicus briefs are legal filings submitted by parties that are not directly involved in a court case but that have expertise relevant to it. The brief was filed just hours after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s decision to designate the company a “supply-chain risk.”
Signatories of the amicus brief include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak, among others.
“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond,” the employees wrote in the brief.
Who are the amici?
Amici are engineers, researchers, scientists, and other professionals employed at U.S. frontier artificial intelligence laboratories. We build, train, and study the large-scale AI systems that serve a wide range of users and deployments, including in the consequential domains of national security, law enforcement, and military operations.
We submit this brief not as spokespeople for any single company, but in our individual capacities as professionals with direct knowledge of what these systems can and cannot do, and what is at stake when their deployment outpaces the legal and ethical frameworks designed to govern them.
As a group, we are diverse in our politics and philosophies, but we are united in the conviction that today’s frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions. We view this conviction not as a result of any particular set of ideological or political commitments, but rather as a conclusion that follows from any reasonable evaluation of the capabilities and limitations of currently available frontier AI systems. It is this conviction that brings us before the Court to respectfully submit this brief, in the hopes that our understanding of the technology at issue, and our unique perspective as employees of companies currently engaged in fierce competition with Anthropic, will shed some light on the stakes of this case.
This case arises from the Pentagon delivering on its threat to designate Anthropic a “supply chain risk” if the company declined to agree to remove limitations on the use of its AI systems for domestic mass surveillance or fully autonomous lethal weapons systems. If it were no longer satisfied with the agreed-upon terms of its contract with Anthropic, the Defendants could have simply canceled the contract and purchased the services of another leading AI company. Instead, Defendants recklessly invoked national security authorities intended to protect the procurement process from interference by foreign adversaries. If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond. And it will chill open deliberation in our field about the risks and benefits of today’s AI systems. Because we understand the risks of frontier AI systems and the need for guardrails, and because we believe that speaking openly about them is of paramount importance, we submit this brief.
We offer three arguments.
First, the government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry. While we are not privy to the details of how the contractual relationship between Anthropic and the Pentagon broke down, we are concerned that the Defendants’ action harms public debate on the risks and benefits of AI, as well as U.S. competitiveness in AI and innovation more broadly.
Second, the technical concerns animating Anthropic’s “red lines” are legitimate and widely recognized within our scientific community as requiring some kind of response. The best currently available AI systems cannot safely or reliably handle fully autonomous lethal targeting, and should not be available for domestic mass surveillance of the American people. While there are various ways to establish these guardrails, we agree that these guardrails must be in place.
Third, as AI professionals, we understand that the substantive risks of the two use cases at issue are profound. AI-enabled mass domestic surveillance would transform the fragmented data ecosystem that already surrounds American life into a unified, real-time instrument for monitoring the entire population. Even the awareness that such a capability exists creates a chilling effect on democratic participation. Autonomous lethal weapons systems, as currently designed and deployed, cannot reliably distinguish combatants from civilians, cannot explain their targeting decisions, and cannot be held within human accountability structures. These concerns require a response.