We could all use our own custom-built chatbot, right? Well, rejoice, because Microsoft's Copilot Studio is a handy tool that lets the less technical among us (those who don't dream in Fortran) create our own chatbots. The idea is to make it straightforward for most businesses and organizations to build a chatbot based on their internal documents and data.
You can imagine a game developer using a chatbot to help players with everything from how to finish a game to picking the best settings and troubleshooting technical issues. But there's inevitably a catch.
According to AI security specialist Zenity, Copilot Studio and the chatbots it creates are a security nightmare (via The Register). Zenity CTO Michael Bargury hosted a recent session at the Black Hat security conference in which he delved into the horrors that can unfold if you give Copilot access to your data to build a chatbot.
Apparently, it all comes down to Copilot Studio's default security settings, which are reportedly inadequate. In other words, the danger is that you use this super-easy Copilot Studio tool to build a genuinely useful bot that customers or employees can query in natural language, only to discover that it opens a huge door to exploits.
Bargury showed how a cybercriminal can insert a malicious payload into a benign-looking email, instruct the Copilot bot to "examine" it, and, presto, the payload is injected.
In another demo, Copilot presented users with a counterfeit Microsoft login page that harvested victims' credentials, all rendered within the Copilot chatbot itself.
What's more, Zenity claims the average large U.S. enterprise already has around 3,000 of these bots up and running. Scarier still, it claims 63% of them are discoverable online. If true, that means the average Fortune 500 company has roughly 2,000 bots ready and willing to dump critical, confidential corporate information.
"We scoured the internet and found tens of thousands of these bots," Bargury said. He says Copilot Studio's original default settings automatically published bots to the web without requiring any authentication to access them. Microsoft fixed that after Zenity reported the issue, but the fix doesn't help any bots built before the update.
"There's a fundamental problem here," Bargury says. "When you give AI access to data, that data now becomes an attack surface for prompt injection." In short, Bargury says, publicly accessible chatbots are inherently dangerous.
There are basically two problems here. On the one hand, bots need a certain level of autonomy and flexibility to be useful, and that's hard to fix. The other problem is what looks like a pretty obvious oversight on Microsoft's part.
That last issue shouldn't be surprising given the fiasco surrounding Windows' Copilot+ Recall feature, which involved constantly taking screenshots of user activity and then storing them without any protection.
Microsoft's response, given to The Register, was fairly boilerplate.
“We appreciate Michael Bargury’s work in identifying and responsibly reporting these techniques through coordinated disclosure. We investigate these reports and continue to improve our systems to proactively identify and mitigate these types of threats and help protect customers.
“Like other post-compromise techniques, these methods require prior system compromise or social engineering. Microsoft Security provides a robust set of security controls that customers can use to address these threats, and we are committed to continually improving our security controls as this technology evolves.”
Like many things AI-related, security looks like yet another area set to be a minefield of unintended consequences and collateral damage. Either way, it seems we're a long way from the prospect of secure, reliable AI that does what we want it to do, and only what we want it to do.