Microsoft filed a brief Tuesday urging a federal judge to temporarily block the Pentagon’s designation of Anthropic as a supply chain risk, marking a rare direct challenge to the federal government from one of America’s largest defense contractors.
The government’s designation imposes “substantial and wide-ranging costs and risks” on companies that use Anthropic’s models “as a foundational layer of their own products and services, which they provide to the U.S. military,” Microsoft said in the filing. The New York Times DealBook called Microsoft’s brief “a remarkable act” and “a momentous decision” for a company that is one of the largest government contractors in America, noting it stands out in a period when corporate America’s unwritten rule has been to avoid picking fights with the White House.
The brief came a day after Microsoft launched Copilot Cowork, a new AI product built on Anthropic’s Claude models, and four months after Microsoft committed to invest up to $5 billion in the startup in a deal that includes Anthropic spending at least $30 billion on Microsoft Azure. Amazon, which has invested $8 billion in Anthropic, has not publicly weighed in on the lawsuit or the supply chain risk designation.

Microsoft hasn’t shied away from fighting Washington at key moments in its history, from its landmark antitrust battle with the Justice Department in the late 1990s to its Supreme Court fight against the Trump administration over DACA immigration protections. The Redmond-based company has built one of the deepest government-relations operations in tech, led by President and Vice Chair Brad Smith, a former D.C. lawyer.
Anthropic sued the Department of War on Monday over the designation, which is historically reserved for foreign adversaries. The designation followed the collapse of contract negotiations in which Anthropic refused to drop two guardrails on its AI models: no use for fully autonomous weapons and no use for mass domestic surveillance of Americans. President Trump separately directed all federal agencies to stop using Anthropic’s technology.
OpenAI moved quickly to fill the gap left by Anthropic, announcing its own Pentagon deal on the same day the designation came down. CEO Sam Altman later acknowledged the timing looked “opportunistic and sloppy.” Thirty-seven engineers and researchers from OpenAI and Google, including Google chief scientist Jeff Dean, separately filed their own amicus brief in support of Anthropic.

In its brief, Microsoft said AI should not be used “to conduct domestic mass surveillance or put the country in a position where autonomous machines could independently start a war,” aligning itself with Anthropic’s position. Microsoft also flagged a double standard: the Pentagon gave itself six months to transition off Anthropic’s models but made the designation effective immediately for contractors. Without a restraining order, Microsoft warned, it and other companies would have to “act immediately to alter existing product and contract configurations” for the military.
