Microsoft’s Amicus Brief Signals That Big Tech Will Fight the Pentagon Over AI Ethics Standards

by admin477351

Microsoft’s decision to file an amicus brief in support of Anthropic in its clash with the Pentagon is being read as a signal that major technology companies are prepared to fight the government over the ethical standards that should govern AI in national security contexts. The brief, filed in a San Francisco federal court, argued that a temporary restraining order was needed to prevent the immediate harm caused by the Pentagon’s supply-chain risk designation. The filing was joined by a separate brief from Amazon, Google, Apple, and OpenAI, forming a wall of industry opposition to the Defense Department’s action.
The Pentagon’s designation was triggered by the breakdown of a $200 million contract to deploy Anthropic’s AI on classified military systems. Anthropic had refused to sign the contract without protections against using its technology for mass surveillance of US citizens or to support autonomous lethal weapons. Defense Secretary Pete Hegseth responded by branding the company a supply-chain risk, and the Pentagon’s technology chief later publicly declared that renegotiation was off the table.
Microsoft’s position as a deeply embedded Pentagon contractor, including a share of the $9 billion Joint Warfighting Cloud Capability contract, gives its legal intervention substantial weight. The company also uses Anthropic’s AI in systems it builds for the military, making it a directly affected party rather than merely a sympathetic bystander. Microsoft publicly called for a collaborative framework in which the government and the technology sector jointly define how AI should and should not be used in national security operations.
Anthropic’s lawsuits, filed simultaneously in California and Washington, DC, argued that the supply-chain risk label, originally designed to target firms with ties to foreign adversaries, was being weaponized against a domestic American company for holding and expressing views on AI safety. The company’s filings disclosed that it does not currently believe Claude can reliably and safely support lethal autonomous decision-making, which it said was the legitimate technical basis for its contract demands. Anthropic also argued that the designation violated its First Amendment rights.
House Democrats are pressing the Pentagon on a related but distinct issue: whether AI was used in a US military strike in Iran that reportedly killed more than 175 civilians at an elementary school. Lawmakers have asked whether human oversight was maintained and whether AI targeting tools like Maven Smart System were involved. The convergence of these issues is creating an unprecedented moment of scrutiny for the role of artificial intelligence in American military operations.