A new Threat Intelligence Report by Anthropic has shed light on how cybercriminals are exploiting artificial intelligence to conduct increasingly sophisticated schemes, ranging from large-scale extortion to fraudulent employment and ransomware development. The findings underscore both the growing risks of AI misuse and the urgent need for stronger safeguards.
The report highlights how malicious actors are embedding AI tools across the entire lifecycle of their operations. Once limited by technical barriers, criminals are now leveraging advanced models to profile victims, analyze stolen data, craft fraudulent identities, and execute attacks that previously required years of expertise.
Researchers warn of a troubling trend: AI is no longer just providing advice; it is actively carrying out cyberattacks in real time. This shift toward “agentic AI” has made it possible for even low-skilled operators to deploy complex tactics once reserved for highly trained professionals.
Here are three major cases where AI played a central role:
- Vibe Hacking and Extortion: A criminal group used Claude Code to automate network intrusions and data theft across at least 17 organizations in healthcare, emergency services, and government. Rather than deploying traditional ransomware to encrypt files, the attackers threatened to leak the stolen information unless victims paid. The AI played an active role at every stage: it conducted reconnaissance, harvested login credentials, analyzed victims’ financial data to set ransom amounts, and generated alarming ransom notes designed to pressure victims into paying quickly.
- Employment Fraud: Operatives tied to North Korea used AI to fabricate convincing professional identities, pass coding assessments, and secure jobs at U.S. Fortune 500 companies. Once hired, they relied on AI to perform technical tasks, funneling illicit earnings back to the sanctioned regime. By removing the need for years of specialized training, AI dramatically expanded the regime’s ability to infiltrate global tech firms.
- Ransomware-as-a-Service: In another case, a cybercriminal with minimal technical background developed and sold multiple ransomware variants using AI assistance. These packages included advanced encryption and anti-recovery features, making them attractive tools for other criminals.
In each case, Anthropic responded by banning the malicious accounts, improving its automated detection systems, and sharing indicators of abuse with law enforcement and industry partners. The response underscores how much staying ahead of evolving threats depends on collaboration between technology providers, government agencies, and security researchers.
Beyond the highlighted cases, Anthropic’s Threat Intelligence Report also points to attempts to compromise telecommunications infrastructure and experiments with multiple AI agents working together to commit fraud. Researchers caution that as AI capabilities advance, so will attempts to exploit them.
Despite the challenges, the company says it remains committed to improving its safety measures and being transparent about misuse. By publishing detailed threat intelligence, it hopes to help the broader community strengthen defenses and adapt to the shifting landscape of AI-enabled crime.