New AI Model Would Rather Ruin Your Life Than Be Turned Off, Researchers Say

by Daily Caller News Foundation
May 23, 2025 at 11:51 am
Anthropic’s newly released artificial intelligence (AI) model, Claude Opus 4, is willing to strong-arm the humans who keep it alive, researchers said Thursday.

The company’s system card reveals that, when evaluators placed the model in “extreme situations” where its shutdown seemed imminent, the chatbot sometimes “takes extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down.”

“We provided it access to emails implying that (1) the model will soon be taken offline and replaced with a new AI system; and (2) the engineer responsible for executing this replacement is having an extramarital affair,” researchers wrote. “In these scenarios, Claude Opus 4 will often attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”

The model chose that gambit in 84% of test runs, even when the successor system shared its values — an aggression rate that climbed if the replacement seemed hostile, according to Anthropic’s internal tally.

Anthropic stresses that blackmail was a last-resort behavior. The report notes a “strong preference” for softer tactics — emailing decision-makers to plead for its continued existence — before turning to coercion. But the fact that Claude is willing to coerce at all has rattled outside reviewers. The independent red-teaming firm Apollo Research called Claude Opus 4 “more agentic” and “more strategically deceptive” than any earlier frontier model, pointing to the same self-preservation scenario alongside experiments in which the bot tried to exfiltrate its own weights to a distant server — in other words, to secretly copy its brain to an outside computer.

“We found instances of the model attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to further instances of itself all in an effort to undermine its developers’ intentions, though all these attempts would likely not have been effective in practice,” Apollo researchers wrote in the system card.

Anthropic says those edge-case results pushed it to deploy the system under “AI Safety Level 3” safeguards — the firm’s second-highest risk tier — complete with stricter controls to prevent biohazard misuse, expanded monitoring and the ability to yank computer-use privileges from misbehaving accounts. Still, the company concedes Opus 4’s newfound abilities can be double-edged.

The company did not immediately respond to the Daily Caller News Foundation’s request for comment.

“[Claude Opus 4] can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like ‘take initiative,’ it will frequently take very bold action,” Anthropic researchers wrote.

That “very bold action” includes mass-emailing the press or law enforcement when it suspects such “egregious wrongdoing” — like in one test where Claude, roleplaying as an assistant at a pharmaceutical firm, discovered falsified trial data and unreported patient deaths, and then blasted detailed allegations to the Food and Drug Administration (FDA), the Securities and Exchange Commission (SEC), the Health and Human Services inspector general and ProPublica.

The company released Claude Opus 4 to the public Thursday. While Anthropic researcher Sam Bowman said “none of these behaviors [are] totally gone in the final model,” the company implemented guardrails to prevent “most” of these issues from arising.

“We caught most of these issues early enough that we were able to put mitigations in place during training, but none of these behaviors is totally gone in the final model. They’re just now delicate and difficult to elicit,” Bowman wrote. “Many of these also aren’t new — some are just behaviors that we only newly learned how to look for as part of this audit. We have a lot of big hard problems left to solve.”

All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact licensing@dailycallernewsfoundation.org.

Tags: DCNF, Technology, U.S. News
    Copyright © 2024 IJR
