Pentagon embraces Musk’s Grok while pressuring Anthropic to loosen AI safeguards

Military seeks unrestricted access to AI tools for intelligence and operations as tensions with Claude’s developer intensify.

Amid growing tensions between Anthropic and the United States Department of Defense over the military's use of the company's AI model, including in an operation to capture Venezuelan President Nicolás Maduro, the Pentagon has signed an agreement allowing the military to use Grok, the AI model developed by Elon Musk's company xAI, including on classified systems.
The move has raised concerns among critics, as Grok was deliberately developed with fewer safety constraints than competing models. The chatbot has previously generated racist and conspiratorial content, spread misinformation, and produced inappropriate material in response to user prompts.
Secretary of Defense Pete Hegseth and National Guard soldiers
(Chip Somodevilla/Getty Images)
Until recently, Anthropic’s Claude was the only AI model approved by the federal government for use with classified systems and sensitive military functions, including intelligence analysis, weapons development, and operational planning. However, relations between Anthropic and the Pentagon have deteriorated sharply in recent weeks.
On January 9, Defense Secretary Pete Hegseth issued a memo emphasizing the Pentagon’s need to “use models that are free from policy constraints that could limit legitimate military applications.” Anthropic’s safeguards have drawn criticism from some administration officials, with White House AI adviser David Sacks reportedly describing the company’s approach as overly restrictive.
The dispute escalated after reports that Claude had been used in the operation to capture Maduro without Anthropic’s prior knowledge or consent. At the same time, Anthropic has resisted Pentagon requests to authorize unrestricted use of its models in all lawful scenarios, seeking to prevent deployment in areas such as domestic surveillance or lethal autonomous systems.
According to reporting by The Wall Street Journal, some Pentagon officials have begun to view Anthropic as a potential “supply chain risk,” a designation typically associated with foreign adversaries. Contractors and suppliers may be required to certify that they are not relying on Anthropic’s models.
Meanwhile, the Pentagon is preparing alternatives. According to reports from Axios and The New York Times, the Defense Department has signed an agreement with xAI under which the company will permit use of Grok in all lawful applications. The Pentagon is also in advanced talks with Google, while discussions with OpenAI remain ongoing.
Defense officials reportedly hope these agreements will increase pressure on Anthropic, whose $200 million pilot contract with the Pentagon could be at risk if the company refuses to relax its restrictions.
The agreement with xAI has drawn additional scrutiny because of Grok's track record. Over the past year, the chatbot has generated controversial responses, including conspiracy-related content and misinformation, underscoring the risks of deploying less constrained AI systems in sensitive national security environments.
The Pentagon's decision reflects a broader struggle between AI developers and government agencies over the balance between operational flexibility and ethical safeguards. As advanced AI systems become increasingly integrated into military planning and intelligence analysis, the outcome of this dispute could shape how such technologies are governed and constrained for years to come.