AI in the Wrong Hands: 6 New Threats from Google's Latest Report
Since our last update in early 2026, the landscape of AI-powered cyber threats has shifted from experimental to industrial scale. Google Threat Intelligence Group (GTIG) now tracks adversaries using generative models for everything from crafting zero-day exploits to launching fully automated malware campaigns. Below, we break down six critical developments every security professional needs to know, from state-backed groups to criminal operators, drawing on Mandiant incident responses, Gemini analysis, and GTIG's proactive research.
1. AI-Crafted Zero-Day Exploits: A Criminal First
For the first time, GTIG observed a threat actor using a zero-day exploit that was almost certainly developed with AI assistance. The criminal group planned to deploy it in a mass exploitation event, but GTIG's discovery of the exploit likely thwarted the attack. Meanwhile, PRC- and DPRK-linked actors are actively investing in AI-driven vulnerability discovery, accelerating the timeline from bug to weapon. This marks a significant shift: AI now lowers the skill barrier for creating sophisticated exploits, broadening the pool of actors capable of launching zero-day attacks.

2. AI-Accelerated Malware and Defense Evasion
Russia-nexus adversaries are leveraging AI to build infrastructure suites and polymorphic malware at unprecedented speed. These AI-driven coding cycles enable rapid creation of obfuscation networks and decoy logic that evade traditional detection. For example, we have seen malware that dynamically alters its code structure between executions—generated on the fly by language models. This makes signature-based defenses nearly useless and forces defenders to rely on behavioral analysis and anomaly detection.
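To see why polymorphism breaks hash-based signatures, consider a minimal sketch using benign, hypothetical stand-in snippets (not actual malware): two code variants with identical behavior produce entirely different SHA-256 digests, so a static blocklist tuned to one will never match the other, while their runtime behavior stays identical.

```python
import hashlib

# Two functionally identical stand-ins, as a polymorphic engine might
# emit: same behavior, different bytes on disk (benign examples).
variant_a = "def run():\n    return sum([1, 2, 3])\n"
variant_b = (
    "def run():\n"
    "    total = 0\n"
    "    for n in (1, 2, 3):\n"
    "        total += n\n"
    "    return total\n"
)

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# Signature view: the digests never match, so a static blocklist
# containing variant_a's hash misses variant_b entirely.
print(sha256(variant_a) == sha256(variant_b))  # False

# Behavioral view: executing both reveals identical observable behavior,
# which is what behavior- and anomaly-based detection keys on.
ns_a, ns_b = {}, {}
exec(variant_a, ns_a)
exec(variant_b, ns_b)
print(ns_a["run"]() == ns_b["run"]())  # True
```

The asymmetry is the whole point: an attacker's model can emit unlimited byte-level variants for free, but the observable behavior they must ultimately perform stays constant, which is why the shift to behavioral analysis matters.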
3. Autonomous Malware: PROMPTSPY Goes Live
The emergence of AI-native malware like PROMPTSPY represents a paradigm shift. Rather than following fixed instructions, this malware uses a language model to interpret system states and generate commands in real time, orchestrating attacks autonomously. Our analysis reveals new capabilities: it can manipulate victim environments, move laterally to new targets, and adapt to defenses without human intervention. This offloads operational burden from attackers, allowing scaled, customized campaigns that were previously resource-intensive.
4. AI as a Research Assistant and Fabrication Engine
Adversaries now treat AI as a high-speed research assistant for the entire attack lifecycle—from reconnaissance to payload development. But the most visible impact is in information operations (IO). The pro-Russia campaign Operation Overload used generative models to flood platforms with synthetic media and deepfakes, fabricating digital consensus on a massive scale. AI enables content creation at near-zero cost, making it easier than ever to manipulate public opinion and disrupt democratic processes.

5. Obfuscated LLM Access: Behind the Scenes
To bypass usage limits and avoid attribution, threat actors have built professionalized middleware and automated registration pipelines. These systems purchase premium-tier access to models using stolen credentials or temporary identities, then resell access on underground forums. Some even abuse free trials through programmatic account cycling. This illicit infrastructure fuels large-scale misuse of LLMs while creating a secondary economy around anonymized access.
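From the defender's side, programmatic account cycling often leaves a telltale pattern: bursts of registrations from adjacent addresses in a short window. The sketch below is a hypothetical, simplified detector (invented log format, arbitrary threshold and window) that flags any /24 prefix registering more than a handful of accounts within a few minutes.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical registration log: (timestamp, source IP, account id).
events = [
    (datetime(2026, 3, 1, 12, 0, 0), "203.0.113.10", "acct-1"),
    (datetime(2026, 3, 1, 12, 0, 30), "203.0.113.11", "acct-2"),
    (datetime(2026, 3, 1, 12, 1, 0), "203.0.113.12", "acct-3"),
    (datetime(2026, 3, 1, 12, 1, 15), "203.0.113.13", "acct-4"),
    (datetime(2026, 3, 1, 15, 0, 0), "198.51.100.7", "acct-5"),
]

def flag_bulk_signups(events, threshold=3, window=timedelta(minutes=5)):
    """Flag /24 prefixes registering more than `threshold` accounts
    within `window` -- a crude signal of programmatic account cycling."""
    by_prefix = defaultdict(list)
    for ts, ip, _acct in events:
        prefix = ".".join(ip.split(".")[:3])  # /24 network prefix
        by_prefix[prefix].append(ts)
    flagged = []
    for prefix, stamps in by_prefix.items():
        stamps.sort()
        for i in range(len(stamps)):
            # Count registrations in a sliding window starting at stamps[i].
            burst = sum(1 for t in stamps[i:] if t - stamps[i] <= window)
            if burst > threshold:
                flagged.append(prefix)
                break
    return flagged

print(flag_bulk_signups(events))  # ['203.0.113']
```

Real abuse pipelines rotate residential proxies precisely to defeat this kind of heuristic, so prefix clustering is only one weak signal among many (device fingerprints, payment-instrument reuse, signup velocity per ASN).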
6. Supply Chain Attacks Target AI Dependencies
Groups like TeamPCP (aka UNC6780) have shifted focus to attacking AI development environments and software supply chains. By compromising dependencies—such as Python packages, model weights, or training pipelines—they gain initial access to high-value targets. These supply chain attacks can lead to multiple outcomes, from data exfiltration to backdoor insertion in AI models. As organizations rush to adopt AI, securing the entire supply chain becomes critical.
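One concrete mitigation for the dependency-compromise vector above is hash pinning: record a digest of each artifact at vetting time and refuse anything whose bytes later differ. A minimal sketch, using a stand-in file rather than a real package (the filename and contents are illustrative):

```python
import hashlib
import pathlib
import tempfile

def sha256_file(path: pathlib.Path) -> str:
    """SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_artifact(path: pathlib.Path, pinned_digest: str) -> bool:
    """Accept the dependency only if it matches the digest recorded at vetting time."""
    return sha256_file(path) == pinned_digest

# Demo with a stand-in "package" file.
with tempfile.TemporaryDirectory() as d:
    pkg = pathlib.Path(d) / "example_pkg-1.0-py3-none-any.whl"
    pkg.write_bytes(b"original package contents")
    pinned = sha256_file(pkg)  # digest recorded when the package was vetted

    print(verify_artifact(pkg, pinned))  # True: untampered

    pkg.write_bytes(b"original package contents\n# injected backdoor")
    print(verify_artifact(pkg, pinned))  # False: tampering detected
```

In practice you would not roll this by hand for Python dependencies: pip's hash-checking mode (`pip install --require-hashes -r requirements.txt`, with `--hash=sha256:...` entries in the requirements file) enforces the same check natively. Note that pinning catches post-vetting tampering but not a package that was malicious when first vetted.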
Conclusion: The Dual Nature of AI Threats
AI is both an engine for adversary operations and a target for attacks. The six developments above show that adversaries are moving fast—automating discovery, evasion, and manipulation at scale. Defenders must respond with equal agility: investing in AI-driven defense tools, hardening supply chains, and monitoring for autonomous malware. The era of industrial-scale AI misuse has arrived, and staying ahead requires continuous adaptation.