Microsoft has confirmed a significant security lapse in its AI-powered productivity tool, Microsoft 365 Copilot Chat, which mistakenly accessed and summarized confidential user emails. The incident, first reported by technology publication Bleeping Computer, exposed protected content from enterprise users’ draft and sent folders within Outlook desktop applications.
The tech giant markets Copilot Chat as a secure generative AI solution for workplace environments, integrated across Microsoft’s ecosystem including Outlook and Teams. However, a flaw caused the system to bypass the sensitivity labels and data loss prevention policies designed to prevent unauthorized access to confidential information.
Microsoft responded swiftly, deploying a worldwide update to address what it described as a ‘code issue.’ Company representatives emphasized that the flaw did not grant unauthorized access to protected data, stating: ‘Our access controls and data protection policies remained intact, though this behavior did not meet our intended Copilot experience.’
The notification regarding the flaw appeared on multiple support platforms, including the IT support dashboard for England’s National Health Service (NHS), suggesting a potential impact on healthcare organizations. Microsoft has given assurances that patient data remained secure throughout the incident.
Industry experts have expressed concern about the accelerating pace of AI implementation in corporate environments. Nader Henein, data protection and AI governance analyst at Gartner, commented that ‘this sort of fumble is unavoidable’ given the rapid deployment of novel AI capabilities. He noted that organizations lack adequate tools to manage and secure each new feature effectively.
University of Surrey cybersecurity expert Professor Alan Woodward highlighted the inherent risks of rapidly developed AI tools, stating: ‘There will inevitably be bugs in these tools, not least as they advance at break-neck speed, so even though data leakage may not be intentional it will happen.’ He advocated for privacy-by-default designs and opt-in-only approaches to such technologies.
The incident, which Microsoft first identified in January, underscores the broader challenge organizations face as they integrate increasingly sophisticated AI tools into sensitive work environments while maintaining data security.
