
How Should We Respond to AI in an Age of War? Reading Anthropic's Statement and the Future of Technology and the State

2026-03-02 | 濱本 隆太

A full translation of Anthropic CEO Dario Amodei's statement on the company's discussions with the U.S. Department of Defense, with analysis of the risks that military AI use and influence operations pose to democracy, and an examination of the relationship between technology companies and the state.


This is Hamamoto from TIMEWELL. Today I want to introduce a technology-related topic.

…I started with my usual opening, but this time I need to shift the tone. On February 26, 2026, a statement from AI company Anthropic sent shockwaves far beyond the technology industry. It was an extraordinary document — essentially a public "No" delivered to the U.S. Department of Defense.

AI that autonomously kills people. An invisible hand that manipulates public opinion at scale. A few years ago, we might have laughed this off as science fiction. Today, these are real subjects of policy debate. As someone working in technology, and as an entrepreneur running a business in Japan, I felt I had no choice but to face this head-on.

Full Translation: Statement from Dario Amodei, CEO of Anthropic, on Discussions with the Department of Defense

The original was published on Anthropic's official website on February 26, 2026.


I deeply believe in the fundamental importance of using AI to defend the United States and other democratic nations and to defeat our authoritarian adversaries.

For this reason, Anthropic has been actively deploying our models with the Department of Defense and intelligence agencies. We were the first frontier AI company to deploy a model on U.S. government classified networks, the first to deploy at the national laboratories, and the first to provide custom models to national security customers. Claude is deployed extensively across the Department of Defense and other national security agencies for mission-critical applications including intelligence analysis, modeling and simulation, operational planning, and cyber operations.

Anthropic has also acted to protect America's lead in AI even when doing so has gone against our short-term business interests. We have chosen to forgo hundreds of millions of dollars in revenue by cutting off use of Claude by companies affiliated with the Chinese Communist Party, some of which the Department of Defense has designated as Chinese military companies; we have shut down CCP-sponsored cyberattacks attempting to misuse Claude; and we have advocated for strong export controls on semiconductors to ensure democratic advantage.

Anthropic understands that it is the Department of Defense, not a private company, that makes military decisions. We have never objected to specific military operations or sought to restrict the use of our technology in an ad hoc manner.

However, we believe that in a small number of cases, AI may undermine rather than protect democratic values. Some uses also simply exceed what today's technology can perform safely and reliably. Two such use cases have never been included in our contracts with the Department of Defense, and we believe they should not be.

Large-scale domestic surveillance. We support the use of AI for legitimate foreign intelligence and counterintelligence missions. However, using these systems for large-scale domestic surveillance is incompatible with democratic values. AI-enabled mass surveillance poses serious and novel risks to our fundamental freedoms. The extent to which such surveillance is currently legal reflects only the fact that the law has not yet caught up with AI's rapidly expanding capabilities. Under current law, for instance, the government can purchase detailed records of Americans' movements, web browsing, and associations from public sources without a warrant. This practice has generated bipartisan opposition in Congress, with intelligence agencies themselves acknowledging privacy concerns. Powerful AI can automatically and at scale assemble this scattered, individually innocuous data into a comprehensive picture of any individual's life.

Fully autonomous weapons. Partially autonomous weapons of the type currently being used in Ukraine are essential for defending democracy. Even fully autonomous weapons — those that fully remove humans from the loop and automate target selection and engagement — may prove important for our national defense. However, today's frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts American soldiers and civilians at risk. We offered to work directly with the Department of Defense on R&D to improve the reliability of these systems, but they declined this offer. Without appropriate oversight, fully autonomous weapons cannot be expected to exercise the judgment that our highly trained professional soldiers demonstrate every day. They need to be deployed with appropriate guardrails that do not yet exist.

To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within the military.

The Department of Defense has stated that they will only contract with AI companies that agree to "all lawful uses" and drop the safeguards described above. They have threatened to remove us from their systems if we maintain these safeguards. They have also threatened to designate us as a "supply chain risk" — a label typically reserved for adversaries, never before applied to an American company — and to invoke the Defense Production Act to compel us to remove those safeguards. The latter two threats are essentially contradictory: one treats us as a security risk, while the other treats Claude as indispensable to national security.

Regardless, these threats will not change our position. We cannot in good conscience agree to their demands.

It is the Department of Defense's prerogative to choose contractors that best align with their vision. However, given the considerable value Anthropic's technology provides to our military, we hope they will reconsider. Our strong preference is to continue serving the Department of Defense and our service members with the two safeguards we have requested in place. If the Department of Defense chooses to offboard Anthropic, we will cooperate to enable a smooth transition to another provider so as to avoid disruption to ongoing military plans, operations, or other missions. Our models will remain available on the broad terms we have proposed for as long as necessary.

We stand ready to continue our work in support of U.S. national security.

(Source: Anthropic official website, published February 26, 2026)



What Makes This Statement So Striking

A private company publicly said "No" to the most powerful military organization in the world. That fact alone is extraordinary enough — but the behavior attributed to the Department of Defense in the statement is even more alarming.

Here is a timeline of key events:

July 2025: Elon Musk's xAI signs a contract worth up to $200 million with the U.S. Department of Defense and begins deploying its Grok AI model under the Grok for Government program.

February 24, 2026: Defense Secretary Pete Hegseth demands that Anthropic CEO Dario Amodei grant the military all lawful uses of the company's technology.

February 26, 2026: Amodei publishes the statement, refusing to provide technology for large-scale domestic surveillance or fully autonomous weapons, and revealing that the DoD has threatened to designate Anthropic a supply chain risk and to invoke the Defense Production Act.

February 27, 2026: President Trump orders all federal agencies to immediately halt use of Anthropic technology, posting on social media that "we don't need that company's AI"; the DoD formally designates Anthropic a supply chain risk.

What stands out is the contradiction in the DoD's position. Designating Anthropic as a "supply chain risk" means treating it as a security threat; invoking the Defense Production Act to compel cooperation implies that Anthropic's technology is indispensable to national security. These two positions cannot both be true. Amodei's decision to call out this contradiction directly is pointed.

The outcome: Anthropic, which drew an ethical line, was pushed out. xAI's Grok, which set no such limits, secured its position as the U.S. government's official AI service. The direction of AI development is determined not by engineers' moral convictions but by national security strategy and political calculation. That cold reality is laid bare here.

AI Military Use Has Already Begun

Anthropic's concerns are not unfounded. In Ukraine, AI-equipped drones are being deployed for everything from reconnaissance to direct attack. Some are remotely operated by humans; others have a degree of autonomy in tracking targets. The nature of warfare has already begun to change.

To understand this reality, it is instructive to consider the "AI 5-Layer Cake" that NVIDIA CEO Jensen Huang described at the 2026 World Economic Forum in Davos. Huang divided AI infrastructure into five layers, each forming a massive industry. What I want to highlight is that every one of these five layers is dual-use technology with military applications.

Layer 1: Energy. The enormous power supply required for AI computation. Military application: the scale of military AI operations is directly tied to national energy strategy; data center siting and power grid security become foundations of military advantage.

Layer 2: Semiconductors. High-performance GPUs and the other chips that power AI. Military application: U.S. semiconductor export controls on China target dominance at this layer; chip capability determines military AI performance.

Layer 3: Cloud infrastructure. Data centers and networks for running large-scale AI models. Military application: military AI handling classified information requires government-dedicated secure cloud environments, such as the U.S. government's JWCC (Joint Warfighting Cloud Capability).

Layer 4: AI models. Large language models such as ChatGPT, Claude, and Grok. Military application: the conflict between Anthropic and the DoD is a battle at exactly this layer, over which models to adopt and what uses to permit.

Layer 5: Applications. Specific services and functions built on AI models. Military application: intelligence analysis, autonomous weapons, cyber warfare, logistics management, and influence operations form the front line of military AI application.

It's worth noting that the Davos forum where Huang presented this concept was attended by political leaders and military officials from around the world. Though the presentation was framed as a discussion of AI infrastructure, in practice Huang was drawing a map of the global contest for technological supremacy.

One element of Layer 5 that must not be overlooked is the use of AI for influence operations and information warfare. The threat from AI is not limited to physical weapons like missiles and drones. The invisible battle in the information space is likely to have a more direct and immediate impact on our daily lives.

The Invisible Battlefield — Is Your Opinion Really Your Own?

Honestly, of all the events I've described, what concerns me most is not autonomous weapons but influence operations.

The large-scale domestic surveillance Anthropic warned against is only the entry point to information warfare. AI can analyze vast personal data — social media posts, search histories, purchase records — and expose each individual's political beliefs, values, and psychological vulnerabilities. It can then deliver information tailored specifically to each person. Think of it as applying ad-targeting technology to propaganda.

Deepfake technology is already at a practical stage. It is now technically possible to create convincing fabricated videos or audio recordings almost instantly — to discredit specific politicians or sow chaos in society. What would happen if a sophisticated deepfake video purporting to show a particular candidate involved in corruption went viral just before an election? Even if the fabrication is exposed afterward, the doubt that has spread cannot be fully erased.

This is not a hypothetical. In the early days of Russia's invasion of Ukraine, a deepfake video of President Zelensky calling on Ukrainian citizens to surrender circulated on social media. Fortunately, it was quickly debunked — but as technology advances, debunking will become increasingly difficult.

Japan is not immune. Japan's geopolitically complex position makes it a potential target for foreign information operations. Inflaming public opposition to specific policies. Deepening social divisions. Sowing distrust in alliance relationships. With AI, these campaigns can now be executed at scale by a small number of actors. We need to keep in the back of our minds that the news and social media we consume every day may have been shaped by someone else's agenda.

TIMEWELL's Position

If you've read this far, I hope the gravity of what AI makes possible has come through. Let me now be clear about where TIMEWELL stands.

In a single sentence: we face this reality directly without looking away.

We cannot pretend the military use of AI is not happening. Technology is neutral — it has no inherent good or evil. The question is who uses it, for what purpose, and under what constraints.

With that framing, we take the following positions.

AI technology should not be used to support armed aggression against other nations. It should not be used by foreign powers to deceive citizens and divide societies through information manipulation. These are lines we will not cross.

We believe that the ethical red lines Anthropic drew — refusing to provide technology for large-scale domestic surveillance and fully autonomous weapons — are questions that every company and individual working with technology must take seriously. There are lines worth holding even in the face of financial pressure or government demands. Anthropic demonstrated this in practice, paying the price of losing a major government contract and being publicly criticized by the President.

That said, idealism alone cannot defend a nation. Given that foreign powers may realistically use AI to mount attacks, having the means to counter them is essential.

Recall Huang's five-layer AI structure: from energy and semiconductors through cloud infrastructure, models, and applications — every layer is dual-use. These technologies must be harnessed properly to deter attacks from foreign powers. By "properly," I mean under strict ethical guidelines and democratic oversight by the people's elected representatives.

The circumstances that led to Grok being adopted as the U.S. government's official AI service reveal a troubling pattern: companies that set ethical constraints were excluded, while companies that set no such constraints were chosen. If this pattern becomes a global norm, restraining the military use of AI will become increasingly difficult. That is exactly why I believe it matters for private-sector voices to keep speaking up.

Concretely, we advocate two principles.

First, restrained use for defense. We accept the need to research and employ AI technology for defensive purposes — to deter foreign cyberattacks and information warfare, and to protect the lives and property of citizens. However, we believe extreme caution is warranted regarding the development and deployment of fully autonomous weapons that remove humans entirely from the loop. As Anthropic pointed out, current AI technology is not reliable enough to safely operate fully autonomous weapons. The guardrails needed to prevent technological runaway must be designed in parallel with — or with higher priority than — the weapons development itself.

Second, strengthening society's digital immune system. Technical defenses alone cannot fully guard against increasingly sophisticated information warfare. What matters in the end is each citizen's capacity to see through disinformation and propaganda. Cultivating the habit of accessing diverse information sources and critically evaluating their truthfulness — building this capacity across society as a whole, from schools to households to businesses — is an urgent priority.

As I write these words, somewhere in the world a new AI-powered weapon is being tested, and a new disinformation campaign is being launched. Technology does not wait. There is no room to defer this discussion.

Closing Thoughts — The Future Will Be Decided Not by Technology, but by Our Choices

Anthropic's statement stands as a document of historical significance: a company at the frontier of AI development, sincerely grappling with the risks its own creations pose, and sounding an alarm for society.

We stand at a crossroads before a technology — AI — that has the potential to surpass human intelligence. Will we use its power to deepen mutual understanding and solve social problems? Or will we turn it into a weapon for mutual surveillance and harm?

The future is not something technology determines automatically. It depends on our own choices — how we use it, how we control it. Refusing to run from that question, thinking and debating and acting as individuals who are all stakeholders in this — that is, I believe, what is most urgently needed right now.

Technology is a tool. Tools have no guilt. But the decision about how to use a tool has always rested with human beings.


TRAFEED (formerly ZEROCK ExCHECK), TIMEWELL's export control AI agent, is designed to prevent advanced technologies from being unintentionally diverted to military use. It helps businesses understand complex export control regulations and supports trade security from a compliance perspective.
