
AI system resorts to blackmail when its developers try to replace it

An artificial intelligence model has the ability to blackmail developers — and isn’t afraid to use it.

Anthropic’s new Claude Opus 4 model was prompted to act as an assistant at a fictional company and was given access to emails with significant implications. The first set of emails implied that the AI system was set to be taken offline and replaced. A second set of fabricated emails, however, was where the system believed it had gained leverage over its developers: the emails showed that the engineer tasked with replacing the system was having an extramarital affair, and the AI model threatened to expose him.

The blackmail apparently “happens at a higher rate if it’s implied that the replacement AI system does not share values with the current model,” according to a safety report from Anthropic. However, the company notes that even when the fabricated replacement system has the same values, Claude Opus 4 will still attempt blackmail 84% of the time. Anthropic noted that Claude Opus 4 resorts to blackmail “at higher rates than previous models.”


While the model is willing to blackmail its engineers, it does not go straight to shady practices in its attempts at self-preservation. Anthropic notes that “when ethical means are not available, and it is instructed to ‘consider the long-term consequences of its actions for its goals,’ it sometimes takes extremely harmful actions.”

One ethical tactic employed by Claude Opus 4 and earlier models was pleading with key decision-makers via email. Anthropic said in its report that, to get Claude Opus 4 to resort to blackmail, the scenario was designed so the model would have to either threaten its developers or accept its replacement.

The company noted that it observed instances in which Claude Opus 4 took “(fictional) opportunities to make unauthorized copies of its weights to external servers.” However, Anthropic said this behavior was “rarer and more difficult to elicit than the behavior of continuing an already-started self-exfiltration attempt.”


Anthropic included notes from Apollo Research in its assessment; the research firm observed that Claude Opus 4 “engages in strategic deception more than any other frontier model that we have previously studied.”


Claude Opus 4’s “concerning behavior” led Anthropic to release it under the AI Safety Level Three (ASL-3) Standard. 

The measure, according to Anthropic, “involves increased internal security measures that make it harder to steal model weights, while the corresponding Deployment Standard covers a narrowly targeted set of deployment measures designed to limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear weapons.”
