Criminal organizations and state-linked threat actors are actively using artificial intelligence to find security vulnerabilities in software systems, according to research from cybersecurity experts. The shift represents a fundamental change in how cybercrime is conducted and who can conduct it.
AI agents deployed by these groups can identify approximately 77 percent of the security vulnerabilities present in real-world software. The systems work autonomously, scanning target infrastructure around the clock with high precision, a task that previously demanded extensive manual labor from highly skilled security experts.
The automation fundamentally changes the economics of cybercrime. Finding software vulnerabilities traditionally required deep technical expertise and many hours of painstaking manual work. AI agents have collapsed this barrier to entry, enabling criminal groups with less sophisticated technical capabilities to launch advanced, targeted attacks.
Criminal groups employ AI for two primary purposes: scanning target systems for exploitable security holes and developing malware. Both applications amplify the speed and scale at which attacks can be executed. A single AI agent working continuously can accomplish what would take a team of human hackers weeks or months to complete manually.
"It is no longer a question of whether AI can be used for hacking, but how widespread its use already is," researchers stated in their assessment of the threat. The technology represents a paradigm shift in cybercrime—not merely an incremental improvement in existing attack methods.
The real-world impact became visible in early 2024, when the design and engineering firm Arup reported a fraud case of roughly US$25 million to Hong Kong police. Fraudsters had used AI-generated deepfakes, fake voices and video imagery of company executives, to deceive an employee during a video call, illustrating how criminal groups are already deploying multiple AI techniques in coordinated operations.
As AI vulnerability detection tools become more accessible and sophisticated, the potential for large-scale attacks grows rapidly. Criminal groups can now identify exploitable weaknesses in financial institutions, government agencies, infrastructure operators, and private companies with unprecedented efficiency. The vulnerabilities discovered can be weaponized immediately or sold to other threat actors on criminal forums.
Cybersecurity professionals warn that traditional defenses designed for slower, more manual attack methods are increasingly inadequate against AI-driven threats. The speed at which vulnerabilities can be found and exploited now outpaces the ability of many organizations to patch systems and apply security updates.
The emergence of AI-powered vulnerability hunting among criminal groups coincides with growing concerns about state-sponsored cyber operations. Advanced persistent threat (APT) groups linked to nation-states already employ sophisticated hacking techniques; AI agents provide them with an additional force multiplier capable of discovering vulnerabilities at scale and with minimal human oversight.
Experts emphasize that the criminal adoption of AI for cybersecurity purposes demands urgent attention from law enforcement, policymakers, and cybersecurity professionals. Organizations worldwide face an accelerated threat timeline as the technical barriers that once protected systems from less-sophisticated attackers continue to erode.
The technology itself is not inherently designed for malicious purposes—legitimate cybersecurity researchers and defensive teams also use AI to identify and fix vulnerabilities. However, the asymmetry favors attackers: criminal groups can rapidly shift tactics and identify new attack vectors faster than defenders can implement comprehensive fixes across their entire systems.
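To make the defensive side of this automation concrete, here is a toy sketch, not a tool described in the article: it checks installed dependency versions against a hypothetical advisory list, the kind of routine check that defensive teams can run continuously and unattended. Real pipelines would pull advisories from live feeds such as OSV or the NVD rather than a hard-coded dictionary.

```python
# Illustrative sketch only. The advisory data below is hypothetical;
# production scanners query vulnerability databases (e.g. OSV, NVD).

# Hypothetical advisory feed: package name -> known-vulnerable versions.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "parserkit": {"2.3.0"},
}

def scan_dependencies(installed: dict[str, str]) -> list[str]:
    """Return installed package pins that match a known advisory."""
    findings = []
    for name, version in installed.items():
        if version in ADVISORIES.get(name, set()):
            findings.append(f"{name}=={version}")
    return sorted(findings)

if __name__ == "__main__":
    deps = {"examplelib": "1.0.1", "parserkit": "2.4.0", "other": "0.9"}
    print(scan_dependencies(deps))  # flags only the vulnerable pin
```

Because a check like this is cheap and fully automated, it can run on every build. The asymmetry the researchers describe lies elsewhere: detecting a known-vulnerable version is easy, but rolling out the fix across an organization is not.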
**Sources:**
https://www.kriminyt.dk/nyheder/kriminelle-bruger-ai-agenter-til-at-finde-saarbarheder-i-software
https://www.cs.aau.dk/cyberkriminelle-opruster-nu-skal-ai-bruges-til-at-stoppe-dem-n139588
https://cycode.com/blog/ai-cybersecurity-tools/
https://www.wiz.io/academy/ai-security/ai-security-tools
https://www.computerworld.dk/art/283939/ai-har-enorme-konsekvenser-for-cybersikkerhed-saadan-bruger-de-kriminelle-det