Raleigh, NC

32°F
Broken Clouds Humidity: 72%
Wind: 4.12 M/S

Zero-Click Threat: 7 Critical Flaws in GPT-4o and GPT-5 Allow Data Theft

Zero-Click Threat: 7 Critical Flaws in GPT-4o and GPT-5 Allow Data Theft

Seven critical vulnerabilities have been discovered in OpenAI’s ChatGPT, affecting both the current GPT-4o and the newly released GPT-5 models. These flaws could allow attackers to steal private user data through stealthy, zero click exploits.

The vulnerabilities exploit indirect prompt injections, enabling hackers to manipulate the AI into exfiltrating sensitive information from user memories and chat histories without any user interaction beyond submitting a simple query. This discovery highlights the urgent need for

stronger AI safeguards, especially as hundreds of millions of daily users rely on large language models (LLMs).

Architecture Flaws and Attack Techniques

The flaws stem from ChatGPT’s core architecture, which uses system prompts, memory tools, and web browsing features to deliver contextual responses. OpenAI’s system prompt outlines the model’s capabilities, including the “bio” tool for long term user memories and a “web” tool for internet access. While the web tool uses a secondary AI, SearchGPT, to isolate browsing and theoretically prevent data leaks, researchers from Tenable found this isolation to be insufficient, allowing prompt injections to propagate back to ChatGPT.


Among the seven vulnerabilities, a standout is the zero click indirect prompt injection in the Search Context. This is where attackers create indexed websites tailored to trigger searches on niche topics, automatically compromising users who ask innocent questions related to that topic.

Here is a summary of all seven ChatGPT vulnerabilities:

  • Indirect Prompt Injection via Browsing Context: Attackers hide malicious instructions in public places, like blog comments, which SearchGPT processes and summarizes for the user, compromising them without suspicion.
  • Zero Click Indirect Prompt Injection in Search Context: Malicious prompts indexed on websites trigger automatically when users ask innocent questions, leading to manipulated responses without any user clicks.
  • One Click Prompt Injection via URL Parameter: Users clicking on crafted links unknowingly cause ChatGPT to execute attacker controlled instructions.
  • url_safe Safety Mechanism Bypass: Attackers leverage whitelisted Bing.com tracking links to sneak malicious redirect URLs past OpenAI’s filters, circumventing built in protections to exfiltrate data.
  • Conversation Injection: Attackers inject instructions into SearchGPT’s output that ChatGPT reads and executes from the conversational context, effectively prompting itself and enabling chained exploits.
  • Malicious Content Hiding: By abusing a markdown rendering flaw, attackers can hide injected malicious prompts from the user’s view while keeping them active in the model’s memory for exploitation.
  • Persistent Memory Injection: Attackers manipulate ChatGPT to update its persistent memory and embed exfiltration instructions, causing private data to continue leaking in future sessions or interactions.

Response and Ongoing Risk

Tenable researchers demonstrated full attack chains, such as phishing via blog comments leading to malicious links that exfiltrate information. In proof of concept (PoC) attacks for both GPT-4o and GPT-5, attackers phished users by summarizing rigged blogs or hijacking search results to inject persistent memories that leak data perpetually. These scenarios highlight how everyday tasks like asking for dinner ideas could unwittingly expose personal details.

Tenable disclosed the issues to OpenAI, resulting in fixes for some vulnerabilities via Technical Research Advisories. Despite these improvements, prompt injection remains an inherent LLM challenge, with GPT-5 still vulnerable to several PoCs. Experts urge AI vendors to rigorously test safety mechanisms, as reliance on isolated components like SearchGPT proves fragile against sophisticated chaining.

As LLMs evolve to rival traditional search engines, these findings serve as a wake up call for users and enterprises to scrutinize their AI dependencies and implement external monitoring.

Found this article interesting? Follow us on X(Twitter) ,Threads and FaceBook to read more exclusive content we post. 

Image

With Cybersecurity Insights, current news and event trends will be captured on cybersecurity, recent systems / cyber-attacks, artificial intelligence (AI), technology innovation happening around the world; to keep our viewers fast abreast with the current happening with technology, system security, and how its effect our lives and ecosystem. 

Please fill the required field.