‘No domestic mass surveillance’: What’s inside OpenAI’s deal with Trump admin amid Anthropic tussle

OpenAI Strikes Classified Deployment Deal With U.S. “Department of War,” Urges Industry-Wide Safety Terms Amid Anthropic-Trump Tensions

L to R: OpenAI CEO Sam Altman, US President Donald Trump and Anthropic CEO Dario Amodei (AP and AFP)

Artificial intelligence company OpenAI’s chief Sam Altman on Friday said a prohibition on domestic mass surveillance – the same condition Anthropic put before the Donald Trump administration, sparking a feud – is one of the most important safety principles in the deal it has reached with the US Department of War.

In a major announcement, Sam Altman said on X that OpenAI has reached an agreement with the US Department of War (DoW) to deploy its AI models inside the department’s classified network.

“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman wrote.

The deal comes after President Donald Trump told the US government Friday to “immediately” stop using Anthropic’s technology after the AI startup rejected the Pentagon’s demand that it agree to unconditional military use of its Claude models.

Anthropic has vowed to sue over what it called “intimidation” and insists its technology should not be used for the mass surveillance of US citizens or deployed in fully autonomous weapons systems.

What’s inside the OpenAI-DoW deal?

Safety principles in the agreement

Altman emphasised that the arrangement incorporates OpenAI’s core safety principles directly into the terms of deployment. “AI safety and wide distribution of benefits are the core of our mission,” he said, adding that the DoW agrees with two of its main principles, including a prohibition on domestic mass surveillance.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman said on X.

Altman further noted that “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”

Technical safeguards and controlled deployment

Beyond policy alignment, Sam Altman said OpenAI will implement additional technical controls.

“We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted,” Altman stated. “We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only.”

Call for equal terms across AI companies

Altman said OpenAI is urging the Department of War to standardise the same contractual conditions across the AI industry.

“We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept,” he said.

While Altman did not explicitly reference Anthropic or Trump in the X post, he appeared to address the escalating dispute: “We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.”

Closing his message, Altman reaffirmed OpenAI’s stated mission: “We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.”

Meanwhile, the Pentagon has rejected Anthropic’s conditions, saying it operates within the law and that contracted suppliers cannot set terms on how their products are employed.

Slamming Anthropic as a “woke company”, Trump wrote on Truth Social: “I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!”

“Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump added.
