
Workers' Use of Shadow AI Presents Compliance, Reputational Risks

Amid the spread of AI in the workplace, a new problem is emerging: many workers will use their preferred tools, whether or not IT has cleared them first.

Terri Coles

July 16, 2025

5 Min Read

Amid the growing adoption of AI tools, a new problem is emerging – when employers don’t provide sanctioned AI tools, many workers will use the ones they prefer anyway.

In sectors like healthcare, manufacturing, and financial services, the use of so-called shadow AI tools in the workplace surged more than 200% year over year, according to CX Trends 2025 from Zendesk. Among enterprise employees who use generative AI at work, nearly seven in ten access public GenAI assistants like ChatGPT, a Telus Digital survey found. And Ivanti research showed 46% of office workers – including IT professionals who understand the risks – use AI tools their employers didn’t provide.

“Given that AI isn't going away, companies need to approach adoption proactively rather than reactively,” said Daniel Spicer, Ivanti’s VP of Security and Chief Security Officer. For Spicer, that means ensuring Ivanti employees use AI securely, with guardrails that prevent potential security and privacy issues, and maintaining an open dialogue about AI's benefits and risks so the technology enhances their work experience rather than hindering it.
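Ivanti hasn't published how its guardrails work, but as a rough sketch, one common building block is an input-sanitization filter that redacts likely sensitive data before a prompt leaves the company boundary. The patterns and placeholder labels below are illustrative assumptions, not Ivanti's actual controls:

```python
import re

# Illustrative patterns only; a real deployment would use a vetted
# DLP library and far broader coverage (names, addresses, record IDs).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholders before the prompt
    is sent to any external AI tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

# The redacted prompt is what actually reaches the AI service.
print(redact("Draft a reply to jane.doe@example.com about SSN 123-45-6789"))
# -> Draft a reply to [REDACTED-EMAIL] about SSN [REDACTED-US_SSN]
```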

AI’s transformative effect on workplaces can push improvements in productivity, but when those gains come via shadow AI tools and devices, companies are exposed to significant security risks.


How Shadow AI Makes Organizations Vulnerable

Parallel to shadow IT, shadow AI is the use of unapproved or unsanctioned artificial intelligence tools. Examples include using Microsoft Copilot with a personal account on a work device or entering company data into a public version of ChatGPT. The unsanctioned tools run the gamut, Zendesk found, including copilots, agents, workflow automators, chatbots, and generative applications.

Though shadow AI tools could boost productivity or satisfaction for workers adopting them, their use has considerable organizational downsides.

Perhaps most significantly, shadow AI tools directly threaten data privacy and security. Inputting data like customer information or financial records into public AI tools can trigger violations of regulations like GDPR or HIPAA or leak proprietary data, said Kessler, a partner at Womble Bond Dickinson who focuses on privacy, cybersecurity, and consumer protection.

The compliance and reputational risks of unauthorized use of third-party AI applications are especially acute in highly regulated sectors like finance and healthcare, Kessler said. “Without robust training on proper usage and data best practices, well-meaning workers can easily violate compliance rules or compromise private information,” she warned.


But these aren’t the only security concerns with shadow AI. Any unapproved software can introduce network or system vulnerabilities, and AI tools are no exception. “Shadow AI usage can result in malicious code, phishing attempts, or other security breaches going undetected until it is too late,” Kessler said.

AI tools also present important considerations around confidentiality and data use. “Most of the shadow AI used is based on a free version of the AI tool, which will likely have very permissive terms governing the use of the inputs and outputs, allowing for that data to be used to train or improve the models,” Kessler said.

These permissive terms can be a problem for any company, she said, but they pose an acute risk for those managing large sets of confidential or sensitive data – inputs that could then be used for secondary purposes, often without the company’s knowledge.

At Ivanti, established policies and procedures allow employees to leverage AI with appropriate safeguards around company data, Spicer said. This approach includes both a clear pathway for employees to submit an AI tool for consideration for company use and a proprietary AI tool for employee use that relies solely on Ivanti data and meets strict privacy and security requirements.


Additionally, AI tools are increasingly powerful, but they are not perfect. Overreliance on their use – or adoption of tools that aren’t the best fit for organizational needs – can result in outputs that are inaccurate, lacking context, out of date, or inconsistent. And a lack of sanctioned and thoughtfully integrated AI tools means it’s likely people at a given organization are using different public tools, sometimes for the same purpose. The result could be inconsistent information and a worse experience for customers or clients.

How to Deter Shadow AI Use

Shadow AI isn’t simply an issue of a lax tech environment or gaps in IT security – it’s a sign employees have a need going unmet by existing AI tools and policies. “Whether for genAI or other tools, shadow IT is the result of not having a defined and reasonable way to test tools or get work done,” Spicer said.

The solution is to enable employees to be active partners in developing AI governance by fostering open workplace dialogue about AI use and tools, Spicer said; this allows workers to discuss which tools help them succeed and lets IT share how to use AI tools safely. Research, surveys, audits, and formal and informal conversations can reveal why these tools are popular with workers and inform the selection of suitable company-approved alternatives.

Spicer recommends a risk-first approach to AI adoption that focuses on the data that goes into the AI and how the company handles that data. He considers this approach essential because generative AI tools can be both a productivity multiplier and, when adopted as shadow IT, a data security risk. “This approach is similar to vendor risk management, allowing organizations to leverage established practices and processes, just adjusted for AI-focused questions,” he said.
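Spicer doesn't detail the questionnaire itself, but as a hedged sketch of what "vendor risk management adjusted for AI-focused questions" might capture, a review could record a few AI-specific facts per tool and flag the risky combinations. The field names and thresholds here are assumptions for illustration, not Ivanti's checklist:

```python
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    """AI-focused additions to a standard vendor-risk review.
    Questions are illustrative, not an actual published checklist."""
    name: str
    data_classification: str         # e.g. "public", "internal", "regulated"
    trains_on_customer_inputs: bool  # do the terms allow training on inputs?
    offers_private_tenant: bool      # closed, org-specific deployment available?
    retention_days: int              # how long the vendor retains inputs

    def risk_flags(self) -> list[str]:
        flags = []
        if self.trains_on_customer_inputs:
            flags.append("inputs may be used to train vendor models")
        if self.data_classification == "regulated" and not self.offers_private_tenant:
            flags.append("regulated data with no private deployment option")
        if self.retention_days > 30:
            flags.append(f"inputs retained for {self.retention_days} days")
        return flags

# A free public chatbot handling regulated data trips all three flags.
tool = AIToolAssessment("PublicChatbot", "regulated", True, False, 365)
for flag in tool.risk_flags():
    print("RISK:", flag)
```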

Official adoption and integration of AI copilots is one way to combat shadow AI use, since workers who can’t use a sanctioned copilot may turn to an external tool. Zendesk advises prioritizing AI copilot integration to help agents maintain privacy and security while realizing the benefits of copilot tools. Many popular public AI tools offer private or organization-specific versions of their service, allowing a company to offer a tool employees already like in a secure, closed environment.
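In practice, many such private offerings expose an OpenAI-compatible API that a company can front with its own gateway, so employees get a familiar tool while traffic stays inside the corporate boundary. A hedged sketch of pointing the standard OpenAI Python client at a hypothetical internal endpoint follows; the URL, token, and model name are placeholders, not a real product:

```python
from openai import OpenAI

# Hypothetical internal gateway; the base_url and model are placeholders.
# The gateway would enforce company policy (logging, redaction, access
# control) before requests ever reach a model.
client = OpenAI(
    base_url="https://ai-gateway.internal.example.com/v1",
    api_key="internal-token",  # issued by IT, scoped and auditable
)

response = client.chat.completions.create(
    model="approved-model",  # whichever model the company has sanctioned
    messages=[{"role": "user", "content": "Summarize this support ticket..."}],
)
print(response.choices[0].message.content)
```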

And post-adoption, it’s equally important to ensure a chosen tool is integrated effectively, Spicer said – and that the company understands which data it can access and for what purposes. Ivanti’s approach involved a dedicated team running controlled tests of genAI tools with specific teams, complete with feedback loops and a gradual rollout.

“It's not just about jumping on the AI bandwagon, it's about knowing if it's worth it – for the business and for the people using it,” Spicer said.

“Employees are more likely to follow best practices when the approved tool is effective, user-friendly, and official policy mandates its use,” Kessler said.


About the Author

Terri Coles

Terri Coles is a freelance reporter. She has been reporting on technology since 2007, when she covered the release of the first iPhone model for Reuters. Since then, her bylines have appeared in publications across the United States and Canada, including The Globe and Mail, CBC News, Yahoo Canada News, Huffington Post, and Informa publications like Data Center Knowledge and ITPro Today. She is always excited to dig into a new beat but especially enjoys covering artificial intelligence, technology trends, cloud computing, and workplace technologies.
