Embrace, but with guardrails: that is the approach companies should take to the growing use of unauthorised ‘Shadow AI’ tools by employees, according to CIOs at two major SaaS companies.
With 78% of AI users bringing their own AI tools to work, according to a Microsoft survey, it’s clear employees don’t feel their needs are being met, said Eric Johnson, CIO of incident management platform PagerDuty.
Speaking at Zendesk’s Relate event in Las Vegas, he said companies shouldn’t attempt to become a “police state” that stamps out every instance of Shadow AI. Rather, “You have to embrace it because it’s coming whether you like it or not.”
He said: “My approach to this now at PagerDuty is how do we actually have those conversations with the [wider] business to find out why those needs aren’t being met and start to have some open dialogue around what we can do to actually help support [employees].”
While Shadow IT, as it used to be known, is by no means a new issue, recent surveys have found Shadow AI use skyrocketing as GenAI becomes more embedded in enterprise culture. Zendesk's own trends survey reported a 188% year-on-year increase in use by customer service agents.
Similarly, Johnson’s co-presenter, Zendesk CIO Craig Flowers, told The Stack his company had adopted an approach that analysed the potential “impact of risk” in Shadow AI cases, but “with a bias towards education.”
See also: Shadow IT squared? What Fortune 1000 CISOs really think about GenAI
However, with a Zendesk survey finding 86% of customer service agents were using unauthorised AI tools on customer data, it’s clear a completely unrestricted “choose your own” AI strategy would inevitably lead to poor, and potentially illegal, privacy practices.
Johnson and Flowers are clearly aware of the issue. “If you’re not setting yourself up for the guardrails first … you’re going to have a lot of problems,” said Johnson.
And what are those guardrails? The most obvious measures are basic security principles such as single sign-on and two-factor authentication for employee logins, the two CIOs said.
The issue is also primarily a data problem, so “the vendor we’re working with has to be able to demonstrate that they have the ability to manage and secure the data,” said Johnson.
“Once we’re able to check off those key boxes, the security, compliance and legal teams say ‘okay you’re good now, as long as you play by these rules.’”

But what happens when an employee is found putting who knows what data into an unapproved LLM? The reaction depends on the exact offence, Flowers told The Stack.
Someone taking company IP and running it through an unauthorised third party is “probably a terminable offence,” he said, but “if they’re doing something out of curiosity and … they don’t put company assets at risk, then it’s an educational opportunity.”
Waiting for these Shadow AI cases to reveal themselves is hardly sound policy, though, so Zendesk, which is pushing its own AI Agents to customers, has created a formal “enterprise architecture review board mechanism” to invite employee proposals on AI use cases.
While Flowers admits “I don’t know what I don’t know”, the mechanism appears to be encouraging employees to come forward with ideas rather than cracking on by themselves.
“We have dozens of proposals coming in, and the benefit of this approach for me has been that we were truly crowdsourcing from the creativity and innovation of our employee base using a technology that all of us are just learning how to use in real time,” he said.
Johnson backed a similar policy, telling the event audience that “there are peppered situations where we’ve enabled new pieces of GenAI in our environment because there was a need that wasn’t being met.”
It all goes back to the choice between embracing Shadow AI and policing it, he said, “and I think that the embrace strategy is really the way to go.”