Cherepanov and Strýček believed their discovery, which they named PromptLock, marked a turning point in generative AI, showing how the technology could be leveraged to create highly flexible malware attacks. They published a blog post announcing that they had uncovered the first instance of AI-powered ransomware, and it quickly became the subject of widespread global media attention.
But the threat was not as dramatic as it first appeared. The day after the blog post went live, a team of researchers from New York University claimed responsibility, explaining that the malware was not actually a complete attack left in the wild but rather a research project, designed simply to prove that it was possible to automate every step of a ransomware campaign, which, they said, they had done.
PromptLock may have turned out to be an academic project, but real attackers are already using the latest AI tools. Just as software engineers are using artificial intelligence to help write code and test for bugs, hackers are using these tools to reduce the time and effort required to launch an attack, lowering the barriers to entry for less experienced attackers.
Lorenzo Cavallaro, professor of computer science at University College London, says that cyberattacks becoming more common now and more effective over time is not a remote possibility but “an absolute reality”.
Some in Silicon Valley have warned that AI is on the verge of being able to carry out fully automated attacks. But most security researchers say that claim is overblown. “For some reason, everyone’s just focused on this idea of malware, like AI superhackers, which is completely ridiculous,” says Marcus Hutchins, principal threat researcher at security company Expel and best known in the security world for taking down a massive global ransomware attack called WannaCry in 2017.
Instead, experts say, we should pay closer attention to the more immediate threats posed by AI, which is already increasing both the speed and the volume of scams. Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and dupe victims out of huge sums of money. These AI-enhanced cyberattacks are only set to become more frequent and more destructive, and we need to be prepared.
Spam and beyond
Attackers began adopting generative AI tools almost immediately after ChatGPT hit the scene in late 2022. As you might expect, these efforts produced spam, among other things. Last year, a report from Microsoft said that in the year to April 2025, the company had blocked $4 billion worth of scams and fraudulent transactions, “many potentially with the help of AI content”.
According to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, at least half of all spam emails are now generated using LLMs. The researchers analyzed about 500,000 malicious messages collected before and after the launch of ChatGPT. They also found evidence that AI is increasingly being deployed in more sophisticated schemes. They looked at targeted email attacks, which impersonate a trusted persona to trick an employee inside an organization out of funds or sensitive information. By April 2025, they found, at least 14 percent of such targeted email attacks were made using LLMs, up from 7.6 percent in April 2024.