US-based AI startup Anthropic has begun hiring a chemical weapons and explosives expert after walking away from a Pentagon deal over concerns about unrestricted AI use. The move sparked confusion on social media, prompting the company to clarify its position. The Dario Amodei-led firm continues to oppose the use of AI in weapons; instead, it plans to build a clear, robust policy governing how its AI systems handle sensitive chemical- and explosives-related information.
According to the job listing, the policy manager will focus on how AI models process high-risk data and will work closely with AI safety researchers to prevent “catastrophic misuse.” The company has emphasized that it is not hiring to develop weapons but to establish safeguards and policies that prevent misuse.
AI Weapons Debate Deepens After Pentagon Dispute
Anthropic has intensified the global debate on the use of AI in military and autonomous weapons systems after its falling-out with the US Department of Defense. While the US military has denied using AI for such purposes, the company remains unconvinced.
Reports indicate that Claude will be removed from classified US military networks within the next six months, and OpenAI will replace it with its own models. However, Claude still supports certain Pentagon operations and reportedly operates within Palantir’s Maven system for target selection and related activities.
Key Responsibilities of the Role
The policy manager will:
- Monitor emerging threats linked to AI-enabled weapons misuse
- Develop evaluation methods to assess AI capabilities related to chemical weapons and explosives
- Design strategies to reduce misuse risks
- Establish safeguards for handling sensitive AI outputs
Salary and Eligibility Criteria
Anthropic is offering an annual salary between $245,000 and $280,000 (approximately ₹2.30 crore to ₹2.68 crore). Candidates must hold a PhD in Chemistry, Chemical Engineering, or a related field and must have 5 to 8 years of experience in chemical weapons or explosives defense.
OpenAI Also Expands Risk Management Hiring
OpenAI, which has secured the Pentagon contract, is actively hiring experts to manage the biological and chemical risks linked to AI systems. Anthropic, meanwhile, has filed a lawsuit against the US Department of Defense after the agency labeled it a “supply chain risk,” a designation the company claims could cost it millions of dollars in lost revenue.