Dario Amodei, co-founder of Anthropic, published a public statement about the company's conversations with the Department of War. Why does this matter to you even if you're not an expert in defense or AI? Because these are decisions about ethical and technical limits that affect how AI systems are used in national security and, by extension, in everyone's life.
What Anthropic said and what the conflict is
Amodei explains that Anthropic has worked actively with the U.S. government: it has deployed models on classified networks and in national laboratories, and has delivered customized models to national security clients. The assistant Claude is being used in intelligence analysis, modeling and simulation, operational planning, and cyber operations.
Still, the company draws two red lines that it has never conceded in its contracts and does not intend to accept now:
- Mass domestic surveillance. Anthropic supports intelligence and counterintelligence work abroad, but considers using AI to carry out mass surveillance of the population incompatible with democratic values. The fragmented data sold and circulated today can, with powerful AI, be assembled into a full portrait of a person's life, and that radically changes the threat to civil liberties.
