Google’s Threat Intelligence Group (GTIG) warns of a new generation of malware that is now using AI during execution to mutate, adapt, and collect data in real time. This integration of large language models (LLMs) significantly boosts the malware's ability to evade detection and maintain persistence.
AI in the Loop
While cybercriminals have increasingly used AI to build malware, plan attacks, and craft phishing lures, Google reports a new and concerning phase: attackers are now deploying AI-powered malware that dynamically adapts its behavior mid-execution.
The GTIG report states that, for the first time, it has identified malware families such as PROMPTFLUX and PROMPTSTEAL that use LLMs during execution. These tools dynamically generate malicious scripts, obfuscate their own code, and create malicious functions on demand rather than hard-coding them into the malware. Google views this as a significant step toward more autonomous and adaptive malware.
In 2025, Google began tracking these experimental malware types that directly leverage LLMs to change behavior dynamically. This signals a clear shift: attackers are moving past using AI merely for coding help and are now building AI directly into the intrusion process.
Notable AI-Powered Malware
GTIG documented early, experimental malware that illustrates this threat:
- PROMPTFLUX: This VBScript dropper, found in June 2025, queries Gemini to request VBScript obfuscation and evasion code. It contains a “Thinking Robot” module that aims to fetch new evasive code just in time. Variants instruct Gemini to rewrite the script hourly in the persona of an “expert VBScript obfuscator,” creating recursive self-regeneration logic. Although PROMPTFLUX is currently a proof of concept, Google has disabled associated assets and strengthened model protections.
- PROMPTSTEAL (aka LAMEHUG): GTIG observed the nation-state actor APT28 using this data miner. PROMPTSTEAL queries an LLM via the Hugging Face API during live operations to generate system- and file-collection commands on the fly. It likely uses stolen API tokens and blindly executes the LLM-generated commands to harvest documents and system information before exfiltration (a defanged sketch of this query pattern appears below).
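To make that loop concrete, here is a minimal, defanged sketch of the query pattern GTIG describes, not code from any recovered sample. The endpoint shape matches the public Hugging Face Inference API, but the model name and token are placeholders, and the model's output is deliberately logged rather than executed:

```python
import requests

# Placeholder endpoint and token: a real sample would target a specific
# hosted code model, reportedly using stolen credentials.
HF_ENDPOINT = "https://api-inference.huggingface.co/models/<code-model>"
API_TOKEN = "hf_..."

def ask_model_for_commands() -> str:
    """Ask a hosted LLM for collection commands at runtime."""
    prompt = "Generate Windows shell commands to list system info and documents."
    resp = requests.post(
        HF_ENDPOINT,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"inputs": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    # Text-generation models on this API return [{"generated_text": ...}].
    return resp.json()[0]["generated_text"]

# The behavioral red flag is what comes next: malware like PROMPTSTEAL
# pipes this text straight into a shell. This sketch only prints it.
print(ask_model_for_commands())
```

The point is architectural: no collection logic ships in the binary, so static signatures keyed to hard-coded commands have nothing to match.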
Google also flagged other AI-enabled malware in the wild, such as FruitShell, a PowerShell reverse shell that embeds hardcoded AI prompts intended to evade AI-powered defenses, and QuietVault, a JavaScript credential stealer that hunts for tokens using on-host AI prompts and command-line AI tools.
These cases mark a crucial shift from AI-as-tooling to AI-in-the-loop malware, signaling an emerging threat trajectory that defenders must anticipate and mitigate.
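One practical consequence for defenders: because these early families carry hardcoded prompts and call well-known LLM endpoints, simple content scanning can surface candidates for triage. The sketch below is an illustrative starting point, not guidance from the GTIG report; the indicator strings are assumptions to tune for your own environment:

```python
import sys
from pathlib import Path

# Illustrative indicators: public LLM API hostnames plus prompt-like
# phrasing of the kind seen in hardcoded prompts. Tune before deploying.
INDICATORS = [
    b"generativelanguage.googleapis.com",  # Gemini API host
    b"api-inference.huggingface.co",       # Hugging Face Inference API host
    b"expert VBScript obfuscator",         # persona phrasing noted above
]

def scan(root: str) -> None:
    """Flag files containing any LLM-in-the-loop indicator string."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue
        hits = [i.decode() for i in INDICATORS if i in data]
        if hits:
            print(f"{path}: {', '.join(hits)}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

In production this logic belongs in YARA rules or EDR content signatures, but the principle holds: LLM-in-the-loop malware has to store its endpoints and prompts somewhere discoverable.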
Google also warned that the underground cybercrime market for AI-powered tools evolved significantly in 2025. GTIG found numerous multifunctional AI tools supporting every attack phase, particularly phishing campaigns. Many of these criminal tools mirror legitimate Software-as-a-Service (SaaS) models, offering free versions and paid tiers for advanced features such as image generation, API access, and Discord integration.
Furthermore, state-sponsored actors from North Korea, Iran, and the People’s Republic of China (PRC) continue to misuse generative AI tools, including Gemini, to enhance every stage of their operations, from reconnaissance and phishing-lure creation to command-and-control (C2) development and data exfiltration.