OpenAI is amending its agreement with the U.S. Department of Defense to clarify limits on intelligence use of its AI tools, CEO Sam Altman said.
OpenAI clarifies its Defense Department partnership, stating that intelligence agencies such as the NSA cannot use its services without additional contractual approval. Image: CH
Washington, United States — March 3, 2026:
OpenAI is revising parts of its agreement with the United States Department of Defense, as Chief Executive Sam Altman signaled new contractual safeguards governing how the company’s artificial intelligence tools may be used inside the U.S. military establishment.
In a post published Monday on X, Altman said the company has been working with the Pentagon — jokingly referring to it as the “Department of War” — to “make some additions in our agreement to make our principles very clear.”
At the center of the amendment is a clarification that OpenAI’s services will not be used by Defense Department intelligence agencies, such as the National Security Agency, without a separate and explicit modification to the contract.
According to Altman, any provision of OpenAI’s tools to intelligence entities would require a follow-on amendment to the existing agreement. The added language effectively creates a legal and procedural checkpoint before the company’s AI systems could be integrated into intelligence operations.
The clarification comes amid heightened scrutiny over how generative AI platforms may be used in classified, surveillance, or military contexts. Civil liberties advocates and policymakers have increasingly questioned the boundaries between commercial AI providers and national security agencies.
By spelling out the conditions under which intelligence agencies could access its tools, OpenAI appears to be balancing commercial expansion against reputational and ethical considerations.
The amendment follows last week's announcement that OpenAI had secured a deal to deploy its technology within the Defense Department's classified network. While operational details have not been disclosed, the move marks a significant step toward embedding generative AI capabilities within secure government systems.
Defense officials have increasingly explored AI applications for logistics planning, cybersecurity threat detection, intelligence analysis, and operational support. The Pentagon’s push to modernize digital infrastructure has accelerated partnerships with private technology firms, particularly in areas deemed strategically critical.
For OpenAI, the Pentagon agreement underscores a broader shift toward deeper engagement with government clients. At the same time, the company has sought to publicly reaffirm its commitment to usage policies and responsible AI principles.
The contractual amendment suggests that OpenAI is drawing a distinction between general Defense Department usage and more sensitive intelligence activities. That distinction could serve as a precedent for future AI-government agreements, particularly as other countries evaluate how to incorporate advanced AI systems into military frameworks.
As artificial intelligence becomes increasingly central to geopolitical competition, the OpenAI–Pentagon partnership highlights a defining question for the industry: how far should commercial AI firms go in supporting national security operations — and under what constraints?
For now, OpenAI’s message is clear: expansion is on the table, but only with explicit guardrails.
