Idea: Anthropic vs DoW #45

@natashaannn

Description

Below is a copy-paste from my newsletter from the Future of Life Institute:

The Big Three
Key updates this month to help you stay informed, connected, and ready to take action.
→ Anthropic vs. DoW: The U.S. Department of War gave AI company Anthropic an ultimatum last week: allow the military unrestricted access to their AI, or lose a $200M contract with the Pentagon and be blacklisted from all government work. Anthropic stood firm that AI shouldn't control weapons or be used for mass surveillance of Americans, limits the Pentagon refused to accept.
Vocal support for Anthropic's commitment to their (bare minimum) redlines emerged from across the U.S., including from lawmakers across the political spectrum and employees at rivals such as OpenAI and Google.
After the Friday deadline imposed by Secretary of War Pete Hegseth came and went, Anthropic's work with the government was suspended and they were slapped with a national security "supply chain risk" label from the White House - usually reserved for foreign adversaries - which could critically disrupt Anthropic's other business partnerships. Hegseth also threatened that the government could invoke the Defense Production Act to force Anthropic to provide its systems tailored "to the military's needs," though that threat hasn't yet materialized.
Just hours later on Friday, OpenAI announced they had struck a deal with the Pentagon allowing the military to use OpenAI's models across its classified network. While OpenAI CEO Sam Altman had expressed support for Anthropic's redlines and claims that OpenAI shares them, it's still unclear how - if at all - the Pentagon will respect them when using OpenAI's models. The move has been met with further skepticism from the public, with a call for ChatGPT users to cancel their subscriptions spreading across social media.
→ Anthropic drops safety pledge: The same week as their showdown with Hegseth, news broke that Anthropic is dropping a central pillar of their Responsible Scaling Policy (RSP), under which they pledged never to train an AI system unless they could guarantee in advance that their safety measures were adequate. While they insist they're not abandoning safety, they're replacing firm pre-deployment guarantees with looser commitments to transparency, risk reports, and "Frontier Safety Roadmaps," essentially switching from an offensive to a defensive strategy. Not a reassuring move from the AI company that's built their brand on safety.
→ AI goes nuclear: Especially relevant given the Anthropic-DoW battle, King's College London ran war game simulations with AI systems from Anthropic, OpenAI, and Google, and found that in 95% of scenarios the models chose to deploy nuclear weapons. The LLMs frequently escalated conflicts to nuclear strikes, showing little hesitation even after being reminded of the catastrophic human consequences. None of them chose full de-escalation or surrender - in fact, when facing defeat they tended to escalate instead.

Nat's Notes

  • While OpenAI ended up securing the contract, it is still miles behind Anthropic: Claude Code still comfortably beats Codex.
