While this story is about Microsoft Copilot, many AI implementations have security issues that could be serious, especially since these issues often go unreported to the clients of the AI products in question. This example from Microsoft is simply the highest-profile one I have found so far. Excerpt follows:

A significant security vulnerability has been discovered in Microsoft’s Copilot for M365 that allowed users, including potential malicious insiders, to access and interact with sensitive files without leaving any record in the official audit logs. After patching the flaw, Microsoft has reportedly decided against issuing a formal CVE or notifying its customers, leaving organizations unaware that their security logs from before the fix may be critically incomplete. The vulnerability, detailed by a security researcher at the tech company Pistachio, was remarkably simple to exploit. Under normal circumstances, when a user asks Copilot to summarize a file, the action is recorded in the M365 audit log, a crucial feature for security monitoring and compliance.  However, the researcher found that by simply adding a command for Copilot not to provide a reference link to the summarized file, the AI assistant would act without triggering any log entry. This effectively creates a digital blind spot for security teams. A malicious employee could use this method to access and exfiltrate confidential data, intellectual property, or personal information right before leaving a company, all without a trace. For organizations in regulated industries like healthcare and finance, which rely on the integrity of audit logs to meet compliance standards like HIPAA, the implications are severe.

This means that if you operate in a regulated industry, subject to HIPAA, FINRA, or other rules that require detailed logging of all file activity, this gap would be a major violation in a compliance audit and could lead to severe penalties or unauthorized data disclosure, just to name two consequences. A more severe, separate issue could have allowed data to be stolen through the Copilot agent with no user interaction and no traces left behind; that one was assigned a CVE number and publicly disclosed by Microsoft.
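One practical takeaway: defenders should verify that Copilot file access actually appears in their exported unified audit logs, rather than assuming it does. The sketch below filters an exported M365 audit log (a JSON list of records) for Copilot interactions touching a given file. It is a minimal illustration, not a monitoring tool: the field names `RecordType` and `AccessedResources` are assumptions about the exported log schema, and real logs should be checked against Microsoft's published audit record schema.

```python
import json

def copilot_accesses(records, filename):
    """Return audit records where Copilot interacted with `filename`.

    Assumes each record is a dict with a "RecordType" field and an
    "AccessedResources" list of resource URLs (hypothetical field names).
    """
    hits = []
    for rec in records:
        if rec.get("RecordType") != "CopilotInteraction":
            continue
        resources = rec.get("AccessedResources", [])
        if any(filename in r for r in resources):
            hits.append(rec)
    return hits

# Example with two sample records: only the Copilot interaction matches.
sample = [
    {"RecordType": "CopilotInteraction",
     "AccessedResources": ["https://contoso.sharepoint.com/salaries.xlsx"]},
    {"RecordType": "SharePointFileOperation",
     "AccessedResources": ["https://contoso.sharepoint.com/salaries.xlsx"]},
]
print(len(copilot_accesses(sample, "salaries.xlsx")))  # 1
```

If a user is known to have summarized a file with Copilot but no matching record appears in a query like this, that absence is exactly the blind spot the excerpt describes.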

When choosing to use AI in your business, make sure you understand the threats a rapidly evolving product like AI can present, and whether you can protect yourself against those threats, prepare for them in your implementation plan, or take another approach entirely. Contact us for an evaluation of your technology plan, including the AI threats facing your business.
