Earlier this week, Secretary of Defense Pete Hegseth sat down with Dario Amodei, the CEO of the leading AI firm Anthropic, for a conversation about ethics. The Pentagon had been using the company's flagship product, Claude, for months as part of a $200 million contract; the AI had even reportedly played a role in the January mission to capture Venezuelan President Nicolas Maduro. But Hegseth wasn't satisfied. There were certain things Claude just wouldn't do. That's because Anthropic had instilled in it certain restrictions. The Pentagon's version of Claude could not be used to facilitate the mass surveillance of Americans, nor could it be used in fully autonomous weaponry: situations where computers, rather than humans, make the final decision about whom to kill. According to a source familiar with this week's meeting, Hegseth made clear that if Anthropic did not eliminate those two guardrails by Friday afternoon, two things could happen: The Department of Defense could use the...