
OpenAI Secures U.S. Defense Department Deal to Deploy AI Models on Classified Network
OpenAI has reached an agreement with the U.S. Department of Defense to deploy its artificial intelligence models across the agency’s classified network, marking a significant development in the integration of advanced AI within U.S. military infrastructure.

The announcement was confirmed by OpenAI Chief Executive Officer Sam Altman, who outlined the foundational safeguards embedded in the agreement.
Safety Principles Integrated Into DoD Agreement
Altman stated that two of OpenAI’s core safety principles include a prohibition on domestic mass surveillance and a firm commitment to ensuring human responsibility in the use of force, particularly in the context of autonomous weapon systems.

According to Altman, the Department of Defense aligns with these principles through its legal and policy frameworks. He added that OpenAI has incorporated these safeguards into the formal agreement and will implement technical controls to ensure that its models operate in compliance with these standards.
The move underscores a growing alignment between AI developers and defense institutions around governance frameworks and operational guardrails for advanced AI deployment.
Anthropic Faces Pentagon Restrictions
The development comes amid escalating tensions between the U.S. administration and Anthropic. Recently, the administration directed federal agencies to halt the use of Anthropic’s software, which has gained traction as a programming assistant.

Subsequently, the Pentagon designated Anthropic as a “supply chain risk,” intensifying scrutiny of the San Francisco-based AI startup.
U.S. Defense Secretary Peter Hegseth stated on social media that no contractor, supplier, or partner conducting business with the U.S. military may engage in commercial activities with Anthropic.
Legal Challenge and Industry Implications
Anthropic has maintained that its chatbot should not be used for mass surveillance of Americans or in fully autonomous weapons operations. In response to the Pentagon’s designation, the company described the move as “legally unsound” and a “dangerous precedent.”

The AI firm said it would challenge any supply chain risk classification in court, asserting that no pressure from the Department of Defense would alter its stance on domestic surveillance or fully autonomous weapons.
The contrasting trajectories of OpenAI and Anthropic highlight the intensifying intersection of artificial intelligence, national security policy, and regulatory oversight in the United States.
Disclaimer: Due care and diligence have been taken in compiling and presenting news and market-related content. However, errors or omissions may arise despite such efforts.
The information provided is for general informational purposes only and does not constitute investment advice, a recommendation, or an offer to buy or sell any securities. Readers are advised to rely on their own assessment and judgment and consult appropriate financial advisers, if required, before taking any investment-related decisions.