Anthropic’s refusal to loosen AI safety standards sparks a high-stakes standoff with the U.S. Defense Department
Washington, D.C., 25 February 2026 – A major debate is unfolding in Washington over how the military should use artificial intelligence. At the center of it is Anthropic, a leading AI company that has drawn a firm line on how far its technology can go.
During a recent meeting in Washington, U.S. Defense Secretary Pete Hegseth reportedly warned Anthropic that the company could lose access to military contracts if it does not relax its AI safety policies. According to people familiar with the discussion, the warning followed Anthropic’s refusal to allow its AI systems to be used for domestic mass surveillance or fully autonomous weapons.
Anthropic’s leadership has consistently said that these uses cross ethical boundaries. The company believes that AI-controlled weapons and large-scale surveillance systems could be misused and cause serious harm if not tightly controlled. Its CEO reiterated that stance in the meeting, emphasizing that some applications of AI remain unsafe, even if they are technically legal.
The Defense Department, however, argues that AI tools should be available for all lawful purposes related to national security. Officials have indicated that the military plans to continue using advanced AI systems and may take steps to ensure access, even if a company objects.
One option reportedly under consideration is the use of the Defense Production Act, a decades-old law that allows the government to require companies to support national security needs during emergencies. If applied, this could force an AI provider to make its technology available to the military.
There is also talk of designating Anthropic a supply chain risk, a move that would place the company on a government blacklist and effectively shut it out of future defense work.
The disagreement has drawn attention across the tech industry. Other major AI companies have already agreed to let their tools be used in any lawful military scenario, including classified environments. Anthropic stands out for maintaining stricter limits, even as competition in the AI sector intensifies.
This standoff comes at a sensitive time for the company, which is preparing for a public listing. Investors are closely watching how the situation develops, as government contracts and regulatory pressure can significantly affect an AI company’s growth and valuation.
Despite the tension, Anthropic’s leadership says the company’s business has continued to grow. The firm argues that long-term trust in artificial intelligence depends on strong safety standards and careful deployment, especially in military and surveillance contexts.
The broader debate highlights a key question facing the future of AI: how to balance national security needs with ethical safeguards. As AI becomes more powerful and widespread, decisions made now could shape how the technology is used for decades to come.