Employee curiosity fuels Shadow AI adoption faster than IT can keep up

Remember Shadow IT? The term emerged around 2010 as cloud and mobile apps proliferated and employees, fed up with clunky enterprise tools, began seeking alternatives they could access on their own. These applications weren't just easy to access; more importantly, they were easy to use. People wanted work tools that were as simple and intuitive as the ones they used at home.
Today, a similar phenomenon is at play: the rise of Shadow AI. To understand just how widespread it has become, ManageEngine, the IT management division of Zoho, recently commissioned a survey of 700 people in North America, split between 350 IT executives and 350 end users, to gauge the extent of the problem.
Shadow AI is defined as the unauthorized use of AI tools at work, but according to the report, it’s often a symptom of deeper problems inside organizations: “Shadow AI is more than just unsanctioned software use. It is a reflection of systemic gaps in governance, communication, and security culture.”
How big is the problem?
The data is telling. Two-thirds of respondents on the management side see data leakage as a big problem. Perhaps more should, given that 93% of employees admitted to inputting company data into unsanctioned tools at work. When tools have been vetted and approved by IT, data leakage is far less likely.
So why is this happening? With so many companies issuing directives to be AI-first, employees are taking them to heart, partly out of genuine curiosity and partly out of concern for their future employment. It’s no wonder they are trying things on their own, says Sujatha Iyer, head of AI security at ManageEngine.
“In the era of AI, where everyone is chasing productivity, employees see AI as helpful for getting their work done faster,” she said, adding that the inherent risks here are data leakage and compliance issues.
The education gap
Part of the problem is that employees simply don’t understand the risk of using unauthorized tools. In fact, 90% of respondents said they trust unauthorized tools to protect their data, a startling statistic. Meanwhile, half of employee respondents saw no risk at all in using unapproved AI software.
That’s a troubling combination, and it suggests that IT leaders are not communicating the risks clearly enough to employees. Solving that requires training and a real understanding of the dangers of using unvetted tools.
While IT sees it as a policy and workflow problem, employees say that’s only part of it, with 60% of respondents indicating they need better education on the risks.
Don’t shut it down
The last thing you want to do is quash curiosity about these tools. As Assaf Keren, CISO at Qualtrics, told me in a FastForward interview earlier this year, uninformed employees are the real risk. “When we say, ‘hey, thou shall not access AI technology because we're afraid of the risk,’ we're basically impacting the security of the entire organization because we're creating employees that are not as knowledgeable,” he said.
The report comes to a similar conclusion: “Shadow AI isn't merely evidence of employee recklessness, it's proof of workplace innovation culture. The organizations that will lead the AI economy are those that harness this initiative rather than suppress it.”
The scope of this challenge becomes clearer when you consider that 85% of IT leaders agreed that employees are adopting AI faster than their teams can evaluate it. Iyer certainly recognizes that. “This mix of high demand, limited clarity around risks and the time needed to carefully evaluate and approve AI tools is what keeps shadow AI growing inside organizations,” she said.
Despite that, one answer might be to give employees access to as many vetted tools as possible across a broad set of categories. That’s the approach Ericsson CTO Erik Ekudden takes. Because Ericsson is a telecom company, security has to be rock solid, so he tries to provide a set of tools that lets him say yes more often than no.
“In general, there is an inclination for us to be very careful. Of course, that's a good thing, but you also want to move fast. You want to make sure that you actually lean in and use the tools,” he said. “I think the way for us is actually a combination of creating safe spaces with sandboxes and actually a very large set of vetted tools. So it's not about restricting, it's about opening up.” Admittedly, not every company will have the resources to do this, but building a system for experimentation and approval could help reduce Shadow AI overall.
Another way to solve it is to set very strict rules about what’s allowed and what’s not, so people understand the policies clearly. As Shopify CISO Andrew Dunbar told me in a February FastForward interview, he runs a very tight ship, and there’s not a lot of room for experimenting.
“We're very opinionated when it comes to tooling, and we keep a very tight circle of trusted vendors where we only allow one particular way to solve a particular problem. This is not a bring your own tech kind of environment,” he said.
Each company is going to take a different approach, and what works for one won’t work for everyone. As Iyer told me, "There isn't a one-solution-fits-all approach to eliminate Shadow AI, but the best path forward is for organizations to build transparent, collaborative and secure AI ecosystems that employees feel confident using."