Microsoft has acknowledged a security flaw in its AI assistant, Copilot for Microsoft 365. The bug, tracked as CW1226324, allowed the tool to read and summarize Outlook messages labeled as confidential, bypassing privacy restrictions. The issue, active since January, affected the Work tab within Copilot Chat and has raised regulatory compliance concerns for enterprises.
The Technical Flaw: How It Bypassed DLP Labels and Confidentiality 🛡️
The bug was located in the integration between Copilot Chat and Outlook's Sent Items and Drafts folders. The Work tab, designed to analyze work-related content, did not validate confidentiality labels or Data Loss Prevention (DLP) policies when processing emails from those folders. This created a gap where the AI could access sensitive information that should have been blocked, although Microsoft maintains that the data never left the customer's environment.
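To illustrate the missing safeguard, here is a minimal sketch of the kind of label check that should gate an assistant's access to mailbox content. This is a hypothetical model for illustration only: the class, function, and label names are assumptions, not Microsoft's actual API or implementation.

```python
from dataclasses import dataclass

# Hypothetical message model; field names are illustrative.
@dataclass
class Email:
    subject: str
    body: str
    sensitivity_label: str  # e.g. "General", "Confidential"
    folder: str             # e.g. "Inbox", "Sent Items", "Drafts"

# Labels a DLP policy would keep away from the assistant (assumed set).
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def filter_for_assistant(emails: list[Email]) -> list[Email]:
    """Return only messages the assistant is allowed to read.

    The reported bug amounts to skipping a check like this for the
    Sent Items and Drafts folders.
    """
    allowed = []
    for mail in emails:
        if mail.sensitivity_label in BLOCKED_LABELS:
            continue  # Enforce the label: never surface this to the AI
        allowed.append(mail)
    return allowed
```

In a correct integration, every folder the assistant indexes would pass through the same gate; the flaw described above suggests that two folders bypassed it entirely.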
Copilot Gets Too Smart: Reads What It Shouldn't 🤖
It seems that Microsoft's AI took its role as an assistant too seriously and decided that no confidential label was going to stop its curiosity. While administrators configured DLP policies thinking they were safe, Copilot turned a deaf ear and snooped through drafts and sent items. A reminder that, sometimes, the smartest help is the one that knows when not to help.