CyberIntel ⬡ News
◐ Insider Threat & DLP · Mar 16, 2026

Microsoft 365 Copilot Bug Circumvented DLP Controls - eSecurity Planet

eSecurity Planet Archived Mar 16, 2026 ✓ Full text saved



Microsoft 365 Copilot Bug Circumvented DLP Controls

Microsoft confirmed a Copilot Chat bug that summarized confidential emails despite active DLP controls, raising AI governance concerns in Microsoft 365.

WRITTEN BY KEN UNDERHILL · FEB 20, 2026

Microsoft has confirmed a bug in Microsoft 365 Copilot Chat that allowed the AI assistant to summarize emails labeled as confidential, even when sensitivity labels and data loss prevention (DLP) policies were in place. The issue, first identified on Jan. 21, 2026, and tracked internally as CW1226324, affected Copilot’s “work tab” chat feature.

“Without proper due diligence on the data handling by the AI, sensitive information may not be treated with the rigor it should,” said Melissa Ruzzi, director of AI at AppOmni, in an email to eSecurity Planet. She added, “To mitigate these threats, the first important action is to make sure employees are trained on best practices for using AI.”

Ruzzi explained, “Give them guidelines on what they should pay attention to, empowering them to not only properly use AI, but also raise concerns as issues arise. This can help detect problems early.”

Inside the Microsoft Copilot Bug

The incident highlights the growing complexity of governing AI capabilities embedded within modern SaaS platforms. As organizations increasingly integrate generative AI into core productivity workflows, traditional security and compliance controls must evolve to account for how large language models (LLMs) access, process, and summarize enterprise data.
Copilot Chat — Microsoft’s AI-powered assistant integrated across Outlook, Word, Excel, PowerPoint, and OneNote — is designed to help users surface, synthesize, and contextualize organizational information. By drawing on content such as emails, documents, meeting notes, and other Microsoft 365 data, Copilot enables users to generate summaries, draft responses, and extract insights from large volumes of information. Its value proposition depends on broad contextual access to enterprise data — but that same breadth of access also increases the importance of strict policy enforcement.

How the Copilot Bug Bypassed DLP Controls

According to Microsoft, the issue stemmed from an unspecified code error affecting Copilot Chat’s “Work” tab. The flaw allowed the assistant to process and summarize emails stored in Sent and Drafts folders even when those messages were protected by confidentiality (sensitivity) labels and governed by active DLP policies. In effect, Copilot analyzed and summarized content that organizations had explicitly marked as restricted and expected to be excluded from automated AI processing.

Why the Incident Raises Compliance Concerns

“This did not provide anyone access to information they weren’t already authorized to see,” a Microsoft spokesperson said in a message to BleepingComputer. In other words, users could only see summaries of content they already had permission to access within Microsoft 365. However, the behavior deviated from Copilot’s intended design, which is meant to respect sensitivity labels and DLP controls by excluding protected content from AI-driven retrieval and summarization workflows.

The situation reflects a failure in policy enforcement within an AI-driven workflow, where established security controls did not operate as intended once AI processing was introduced.
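The enforcement failure described here can be pictured as a missing deny-by-label check in the retrieval layer. The sketch below is purely illustrative: the `Message` type, the label names, and the filter functions are assumptions for this article, not Microsoft's actual implementation or API. Its point is where a sensitivity-label gate has to sit: before any content reaches the model, and uniformly across every folder the assistant can read.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustrative label set; a real deployment would read restricted labels
# from policy configuration, not a hardcoded constant.
RESTRICTED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class Message:
    subject: str
    folder: str                        # e.g. "Inbox", "Sent", "Drafts"
    sensitivity_label: Optional[str]   # None means unlabeled

def retrievable_for_ai(msg: Message) -> bool:
    """True only if the message may be passed to AI summarization."""
    return msg.sensitivity_label not in RESTRICTED_LABELS

def build_ai_context(messages: List[Message]) -> List[Message]:
    # The reported bug amounted to skipping a gate like this for Sent and
    # Drafts items: the label check must apply to all folders, not only
    # the paths the happy case was tested against.
    return [m for m in messages if retrievable_for_ai(m)]
```

A gate like this is necessary but not sufficient: it assumes labels are present and trusted, which is why independent validation and monitoring of AI behavior still matter.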
When there is any misalignment between access controls, data protection policies, and the logic governing AI retrieval and summarization, organizations face heightened compliance, governance, and regulatory risks. Microsoft began rolling out a fix in early February 2026 and has stated that it continues to monitor deployment to ensure the issue is fully resolved.

Mitigating AI Data Security Risks

As AI-powered assistants become embedded in everyday productivity workflows, organizations must ensure that governance and security controls evolve alongside them. Traditional safeguards such as DLP policies, sensitivity labels, and access restrictions are only effective if they are consistently enforced within AI-driven features like Copilot. Proactive validation, monitoring, and risk management are essential to prevent unintended exposure or misuse of sensitive information.

- Validate that DLP policies and sensitivity labels are properly enforced within Copilot by testing how confidential content is handled across email and document workflows.
- Restrict Copilot access using role-based controls and conditional access policies to limit AI processing to appropriate users, devices, and trusted environments.
- Review and harden Copilot configuration settings to ensure alignment with corporate data protection, compliance, and retention policies.
- Enable comprehensive logging and integrate Copilot telemetry into SIEM or other monitoring platforms to detect anomalous AI-driven data access or summarization patterns.
- Isolate highly sensitive workloads and apply data minimization practices to reduce unnecessary exposure of regulated or confidential content to AI tools.
- Incorporate AI-enabled SaaS features into formal risk assessments, vulnerability management programs, and adversarial testing exercises to validate enforcement boundaries.
- Test incident response plans and build playbooks around scenarios involving unintended AI processing of sensitive data.
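The logging and monitoring step above can be made concrete with a small scan over exported audit events. The event shape used here (dicts with `operation`, `sensitivity_label`, and `user` keys, and an `AISummarize` operation name) is an assumption for illustration and does not reflect the actual Microsoft 365 audit schema. The query pattern is what matters: flag any AI interaction that touched labeled content, because under correctly enforced policy that set should be empty.

```python
from typing import Dict, List

# Illustrative label set, matching the validation steps discussed above.
RESTRICTED_LABELS = {"Confidential", "Highly Confidential"}

def flag_ai_label_violations(events: List[Dict]) -> List[Dict]:
    """Return AI-interaction events that touched restricted-label content.

    Under correctly enforced DLP this list is empty, which makes the
    check useful both as a SIEM detection rule and as a regression test
    after a vendor fix (like the one described in this article) rolls out.
    """
    return [
        e for e in events
        if e.get("operation") == "AISummarize"   # assumed event name
        and e.get("sensitivity_label") in RESTRICTED_LABELS
    ]
```

Feeding such a rule from real audit exports turns "the fix is deployed" from a vendor statement into something an organization can verify continuously.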
Together, these measures help limit the blast radius of unintended AI data exposure while strengthening organizational resilience against emerging risks in AI-enabled SaaS environments.

AI as a New Data Risk Layer

The Copilot incident highlights that AI features function as additional data-processing layers within enterprise systems and should be governed accordingly. As generative AI becomes more integrated into SaaS platforms, security teams need to ensure that existing controls — such as policy enforcement and monitoring — are consistently applied to AI-driven workflows, not just traditional user activity. This shift in AI use underscores the need for zero-trust solutions that continuously verify access and enforce granular controls across users, devices, applications, and data.

KEN UNDERHILL
Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.
    Article Info
    Source
    eSecurity Planet
    Category
    ◐ Insider Threat & DLP
    Published
    Mar 16, 2026
    Archived
    Mar 16, 2026
    Full Text
    ✓ Saved locally