OpenAI announced plans to amend its contract with the Pentagon in response to public backlash regarding the potential use of its artificial intelligence technologies for surveillance and military purposes. The company’s CEO, Sam Altman, shared an internal memo on social media addressing these concerns and outlining the steps being taken to clarify the contract’s terms.
On Monday, Altman said OpenAI is working with Pentagon officials to add explicit language to the agreement ensuring its AI systems will not be intentionally employed for domestic surveillance of U.S. persons and nationals. The updated contract will emphasize compliance with relevant law, including the Fourth Amendment to the U.S. Constitution, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act (FISA) of 1978.
Altman also noted that the Department of Defense confirmed OpenAI’s services would not be used by military intelligence agencies such as the National Security Agency (NSA). Extending services to such agencies in the future would require a separate contract modification.
The announcement follows OpenAI’s recent agreement to deploy its AI models on classified military networks, a deal struck just before tensions escalated between the United States and Iran. Altman acknowledged that the company may have moved too quickly in finalizing the contract, saying “I got things wrong” and that the complexity of the issues demands clearer communication. He said that although the intention was to prevent a worse outcome, the rapid decision appeared “opportunistic and sloppy.”
The OpenAI contract arrived amid a broader debate over military use of AI. Hours before the deal was publicized, former President Donald Trump directed federal agencies to stop using Anthropic’s AI system, Claude, after negotiations faltered. Anthropic had insisted on explicit contractual prohibitions against mass domestic surveillance and fully autonomous weapons, meaning systems capable of lethal action without human oversight.
The Pentagon’s arrangement with OpenAI sparked criticism and protests over fears that the company’s technologies might be used for domestic monitoring or autonomous lethal capabilities, claims Altman has denied. Demonstrations took place outside OpenAI offices in San Francisco and London, and advocacy groups, including QuitGPT, launched boycotts and planned further protests.
Nearly 500 employees of OpenAI and Google expressed support for Anthropic’s stance in an open letter. Anthropic has not publicly responded to the recent developments. OpenAI’s revised contract aims to address the controversy by clarifying the legal and ethical boundaries governing the deployment of its AI tools with U.S. defense agencies.