The Thinking Company

What Is Shadow AI?

Shadow AI is the use of artificial intelligence tools by employees without organizational knowledge, approval, or governance oversight. It is the AI-era equivalent of shadow IT, but with higher risk velocity, because AI tools actively process the data they receive and, depending on the provider's terms, may retain it or train on it.

It typically involves staff using consumer AI services — ChatGPT, Claude, Gemini, Midjourney, and others — for work tasks, feeding in company data, client information, and proprietary content without understanding the security, compliance, or intellectual property implications.

Every organization with knowledge workers has shadow AI. Salesforce’s 2025 workforce survey found that 55% of employees using AI at work are doing so without explicit employer approval, and 28% have input confidential company data into consumer AI tools. [Source: Salesforce, 2025] The challenge is not eliminating shadow AI — it is acknowledging its existence and channeling it into a managed AI governance framework. Organizations that ignore shadow AI accumulate risk silently; those that manage it convert unauthorized experimentation into sanctioned AI adoption.

Why Shadow AI Matters for Business Leaders

Shadow AI creates four distinct categories of organizational risk, each compounding over time.

Data security risk is the most immediate. When employees paste confidential client data, financial projections, or proprietary code into consumer AI tools, that data enters third-party systems with terms of service that may permit model training. Cyberhaven’s 2025 data security report found that 11% of data pasted into ChatGPT by enterprise employees was confidential, including source code, customer records, and legal documents. [Source: Cyberhaven, 2025]

Regulatory compliance risk follows directly. Under GDPR, sharing personal data with an AI provider when no data processing agreement is in place typically violates Article 28. The EU AI Act adds further exposure: if employees use AI for high-risk decisions (hiring screening, credit assessments) through unsanctioned tools, the organization lacks the documentation and oversight the regulation requires. GDPR fines for AI-related data processing violations exceeded EUR 120 million in 2025 alone. [Source: EDPB Enforcement Tracker, 2025]

Quality and reliability risk is harder to detect but equally damaging. AI-generated outputs used without verification can introduce errors into business decisions, client deliverables, and public communications. A 2025 Deloitte survey found that 34% of organizations had discovered AI-generated content errors in external-facing materials. [Source: Deloitte, 2025]

Intellectual property risk rounds out the picture. Employees who input proprietary methodologies, trade secrets, or unreleased product information into AI tools may be inadvertently exposing IP. Samsung banned employee use of generative AI after discovering that engineers had uploaded semiconductor source code to ChatGPT — a risk that organizations at Stage 1 of the AI maturity model routinely underestimate.

How Shadow AI Works: Key Components

Common Shadow AI Patterns

Shadow AI manifests in predictable patterns across organizations:

  • Marketing teams use generative AI for copy and image creation.

  • Sales teams use AI to draft proposals and analyze prospects.

  • Finance teams use AI to summarize reports and build models.

  • Legal teams use AI for contract review and research.

  • HR teams use AI for job description writing and candidate screening.

In each case, employees adopt tools individually because the organization has not provided sanctioned alternatives, or because sanctioned tools are too restrictive. Gartner predicts that by 2027, 75% of employees will have used AI tools acquired outside IT governance. [Source: Gartner, 2025]

Detection Methods

Identifying shadow AI requires a multi-layered approach. Network monitoring tools can detect traffic to known AI service domains. Endpoint security software can flag AI application installations. Cloud access security brokers (CASBs) can identify AI tool usage through SSO and browser extension monitoring. Employee surveys — conducted anonymously — often reveal more shadow AI than technical monitoring because they capture mobile device and personal account usage that corporate tools cannot see.
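
To make the network-monitoring layer concrete, the sketch below scans a proxy log for traffic to known AI service domains. It is a minimal illustration, assuming a plain CSV log in timestamp,user,domain form and a small sample watchlist; neither the log format nor the domain list comes from any specific monitoring product.

    # Minimal sketch of the network-monitoring approach described above.
    # Assumed inputs (illustrative only): a CSV proxy log with
    # "timestamp,user,domain" rows and a sample watchlist of AI domains.

    import csv
    from collections import Counter
    from pathlib import Path

    AI_DOMAINS = {
        "chatgpt.com",
        "chat.openai.com",
        "claude.ai",
        "gemini.google.com",
        "midjourney.com",
    }

    def scan_proxy_log(log_path: Path) -> Counter:
        """Count requests to known AI domains, keyed by (user, domain)."""
        hits = Counter()
        with log_path.open(newline="") as f:
            for row in csv.reader(f):
                if len(row) != 3:
                    continue  # skip malformed lines
                _timestamp, user, domain = row
                # Match the watched domain itself or any subdomain of it.
                if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                    hits[(user, domain)] += 1
        return hits

    if __name__ == "__main__":
        for (user, domain), count in scan_proxy_log(Path("proxy.log")).most_common():
            print(f"{user}\t{domain}\t{count}")

The same pattern extends to DNS logs or CASB exports, but as noted above, it cannot see personal devices and accounts; anonymous surveys fill that gap.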

Root Cause Analysis

Shadow AI is a symptom, not a disease. The root cause is almost always one of three factors: (1) employees see productivity value that the organization has not officially captured, (2) the organization’s official AI tools are too limited or cumbersome, or (3) there is no clear policy, so employees assume personal AI use is acceptable. Addressing shadow AI effectively requires treating the root cause, not just blocking access.

Transition to Managed AI

The goal is not to eliminate AI usage but to convert shadow AI into sanctioned, governed AI adoption. This means providing enterprise-grade AI tools with appropriate security controls, establishing clear acceptable use policies, and creating a fast-track process for evaluating and approving new AI tools that employees request. Organizations that simply block AI tools without providing alternatives see shadow AI migrate to personal devices and accounts — making it invisible rather than absent.

Shadow AI in Practice: Real-World Applications

  • Samsung (Electronics): In early 2023, Samsung discovered that semiconductor engineers had uploaded proprietary chip designs and internal meeting notes to ChatGPT. The company initially banned all generative AI use, then developed an internal AI platform with data security controls. By 2025, the platform served 45,000 employees with enterprise-grade protections while shadow AI incidents dropped by 90%.

  • Citigroup (Financial Services): Citigroup’s 2024 internal audit revealed that 38% of employees across its investment banking division were using unauthorized AI tools for client presentation drafting and financial modeling. The bank responded by deploying an enterprise AI assistant with data loss prevention controls and requiring all AI-generated content in client materials to undergo compliance review. Time-to-production for sanctioned AI tools was reduced from 6 months to 6 weeks.

  • NHS Digital (Healthcare): A 2024 audit of NHS trusts found that clinical and administrative staff were using consumer AI tools to summarize patient notes and draft referral letters — creating GDPR exposure with patient health data. NHS Digital responded with an approved AI platform with BAA-equivalent data protections and mandatory AI literacy training for 50,000 staff. The program reduced unauthorized AI data processing incidents by 72% in the first year.

How to Get Started with Managing Shadow AI

  1. Conduct a shadow AI audit: Use a combination of network monitoring, endpoint analysis, and anonymous employee surveys to map the current shadow AI landscape. The goal is visibility, not punishment — frame the audit as understanding how employees are already gaining value from AI.

  2. Quantify the risk: For each shadow AI use case discovered, assess the data sensitivity, regulatory exposure, and business impact if the usage became public. Prioritize action on high-risk patterns (client data, personal data, proprietary IP) while leaving low-risk usage for later governance; a simple scoring sketch follows this list.

  3. Establish an acceptable use policy: Define clear, simple rules covering which AI tools are approved, what data classifications can be input, and how AI outputs must be reviewed. Root the policy in your broader AI strategy and ethical principles. Publish widely and train all employees.

  4. Provide sanctioned alternatives: For every shadow AI pattern you restrict, offer a governed replacement. If marketing uses ChatGPT for copy, provide an enterprise AI writing tool. If developers use Copilot without approval, procure it through IT with proper configurations. The fastest way to reduce shadow AI is to make sanctioned AI easier to use.
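
As a worked illustration of step 2, the sketch below scores each discovered use case on the three factors named there. The 1-3 rating scale, the multiplicative score, and the sample use cases are assumptions made for this example, not a standard methodology.

    # Hedged sketch of the risk-quantification step (step 2 above).
    # The ratings, weights, and example cases are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class ShadowAIUseCase:
        description: str
        data_sensitivity: int     # 1 = public, 2 = internal, 3 = client/personal/IP
        regulatory_exposure: int  # 1 = none, 2 = sector rules, 3 = GDPR / EU AI Act
        business_impact: int      # 1 = minor, 2 = reputational, 3 = material loss

        @property
        def risk_score(self) -> int:
            # Multiplicative, so any single high-rated factor dominates (max 27).
            return self.data_sensitivity * self.regulatory_exposure * self.business_impact

    cases = [
        ShadowAIUseCase("Marketing copy drafted in a consumer chatbot", 1, 1, 2),
        ShadowAIUseCase("Client financials pasted in for summarization", 3, 3, 3),
        ShadowAIUseCase("Candidate screening notes with personal data", 3, 3, 2),
    ]

    # Triage: act first on the highest scores, per step 2.
    for case in sorted(cases, key=lambda c: c.risk_score, reverse=True):
        print(f"{case.risk_score:>3}  {case.description}")

A multiplicative score is one reasonable design choice here: it ensures a single high-rated factor (such as personal data) pushes a use case to the top of the triage list even when the other factors are low.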

At The Thinking Company, we help organizations transition from shadow AI to governed AI adoption. Our AI Diagnostic (EUR 15-25K) includes a shadow AI assessment that maps unauthorized usage, quantifies risk exposure, and builds a transition roadmap to enterprise-grade AI deployment.


Frequently Asked Questions

Is shadow AI the same as shadow IT?

Shadow AI is a subset of shadow IT, but with distinct and amplified risks. Traditional shadow IT — using unapproved SaaS tools or personal devices — creates data residency and security concerns. Shadow AI compounds those risks because AI tools actively process, analyze, and potentially retain the data employees input. A shadow IT tool stores your data; a shadow AI tool learns from it. The risk velocity is higher because a single prompt containing sensitive data can have irreversible exposure consequences.

How widespread is shadow AI in enterprises?

Research consistently shows that shadow AI is pervasive. Microsoft’s 2025 Work Trend Index found that 78% of knowledge workers use AI at work, but only 34% use organization-approved tools exclusively. Salesforce’s data indicates 55% of enterprise AI usage is unapproved. The pattern holds across industries, though regulated sectors (financial services, healthcare) report marginally lower rates due to stronger data handling awareness. Assume your organization has shadow AI — the question is how much and how risky.

Should companies ban AI tools to prevent shadow AI?

Banning AI tools is almost universally counterproductive. It drives usage underground — employees switch to personal devices and accounts, making shadow AI invisible instead of absent. The more effective approach is to provide sanctioned alternatives with proper security controls, establish clear usage policies, and invest in AI literacy training. Companies that ban AI tools lose the productivity benefits while retaining the risks. Companies that govern AI tools capture both the value and the control.


Last updated 2026-03-11. For a complete framework for managing AI risk including shadow AI, see our AI Governance Framework pillar page.