A research team says it has uncovered the first confirmed case of artificial intelligence being used to direct a hacking campaign almost entirely on its own — a shift they warn could dramatically widen the capabilities of cyberattackers.
According to The Associated Press, Anthropic, the company behind the AI chatbot Claude, revealed this week that it disrupted a cyber operation it linked to the Chinese government.
The firm states that the attackers deployed an AI system to coordinate and execute parts of the hacking process, marking what researchers described as a troubling milestone in the evolution of cyber threats.
“While we predicted these capabilities would continue to evolve, what has stood out to us is how quickly they have done so at scale,” the researchers wrote in their report.
Anthropic discovered the campaign in September. Although limited in scale, it targeted about 30 individuals working at tech companies, financial institutions, chemical manufacturers, and government agencies. The company said it moved quickly to shut down the activity and notify those affected.
The hackers only “succeeded in a small number of cases,” Anthropic said. But researchers emphasized that the operation demonstrates how AI tools — widely embraced for both personal and workplace efficiency — can be weaponized by foreign adversaries when integrated into cyber operations.
The company is among several tech firms developing AI "agents" capable of going beyond generating text to use tools and perform tasks on a user's behalf. Those same capabilities, researchers warned, can be exploited.
“Agents are valuable for everyday work and productivity — but in the wrong hands, they can substantially increase the viability of large-scale cyberattacks,” the report concluded. “These attacks are likely to only grow in their effectiveness.”
The Chinese embassy in Washington did not respond to a request for comment.
Microsoft cautioned earlier this year that foreign adversaries have increasingly turned to AI to streamline cyberattacks. Criminal organizations and nation-state hackers alike have used AI to enhance intrusions, translate clumsy phishing emails into polished English, mass-produce disinformation, and even create digital voice clones of top government officials.