Most organisations treat Shadow AI as a problem to suppress: a compliance failure, a governance gap, a risk to be eliminated. The language gives it away. Shadow. Rogue. Unauthorised. Something that needs stamping out (a common old-school IT approach).
But the reality is that ‘Shadow’ AI is also an asset. It is a fast, honest signal about how staff are actually using AI to get their work done. It can reveal what teams need, where processes are broken (or at least clunky), and which parts of your organisation are organically developing the capabilities to make the most of AI tools. Try to suppress it and you will most likely fail, driving it deeper underground while losing these valuable signals. You gain nothing except the illusion of control.
What does Shadow AI actually look like?
The term sounds dramatic, but the reality is mostly mundane. Shadow AI is not secret automation projects but the everyday, largely unobserved use of AI tools by staff trying to do their jobs better: drafting and rewriting reports; summarising long documents; cleaning messy datasets; building presentations; vibe coding; transcription and translation; managing overloaded email inboxes, and so on.
Perhaps most underrated and undervalued – using AI as a thinking partner to help them test ideas and work through problems.
None of this is exotic or scary. Most of it is invisible to management and IT.
Why suppression backfires
Shadow AI cannot realistically be prevented. This is not a provocation; it is an observation about how technology adoption works. Block tools on corporate devices, and staff use personal devices. Restrict accounts, and they use personal accounts. Mandate approval processes, and they route around them.
I saw the same pattern at Cisco in the 90s: corporate IT tried to restrict teams to authorised tools only. The result was that every department had a server under a desk running the tools it needed to do its work. Eventually leadership moved to a model of visibility and management rather than attempted total suppression. The dynamics with AI are not fundamentally different, just faster and more distributed.
The problem with suppression is not that it fails to stop use. It is that it shifts use to places where the organisation has no visibility. Risks that might have been observable and addressable become hidden.
More worryingly, the invaluable learning that might have been captured and shared becomes isolated, fragmented, and of little use to anyone.
A categorisation error makes the risk seem bigger than it is
It helps to be precise about where Shadow AI sits in an organisation. One way to think about this is through a simple framework I have been working on: a hierarchy of AI engagement (more on this to follow; sign up to my newsletter to get notified).
- AI Users: the majority of staff who will need to interact with AI tools to assist their existing work.
- AI Professionals or Integrators: people who configure, customise, or embed AI into workflows and systems.
- AI Strategists or Leaders: those setting direction, allocating resources, making governance decisions.
- AI Developers: those building models or applications from scratch.
The fear around Shadow AI at times portrays it as an organisation-wide Leadership or rogue Developer level issue, when in reality it is predominantly an AI User phenomenon. It is about staff using readily available tools to do their jobs.
As an AI User issue, this is a different kind of phenomenon to the ‘server under a desk’ – a better analogy is the early spread of spreadsheets. Organisations did not solve spreadsheet ‘risk’ by banning Excel; they managed it by accepting widespread use, developing competencies, building some guardrails around high-risk applications, and tolerating messiness elsewhere. Shadow AI may require a similar evolution.
This matters because it means applying higher-level governance frameworks to Shadow AI is a category mismatch. Treating individual use of a chatbot the same way you would treat procurement of an enterprise system is not just disproportionate; it actively discourages the visibility you need.
What Shadow AI tells you
If Shadow AI is made visible rather than suppressed, it becomes a diagnostic. It shows where capability development is already happening. It reveals which tools staff find useful and which tasks they are applying them to. It surfaces unmet needs that official systems or training are not addressing.
It also reveals organisational gaps. If staff are using AI extensively for summarisation, that may signal that internal documents are too long or poorly structured. If they are using it for basic data cleaning, that may indicate that existing data systems are inadequate. If they are using it to write external communications, that may suggest processes or templates are not fit for purpose.
Above all, it can help you define your internal AI competency needs and staff training programmes.
Competence before compliance
Tiago Peixoto highlights this same problem in his article How to Guarantee AI Failure: A Field Guide for the Well-Meaning Senior Official, describing what he calls the AI Golem Effect and safety sludge:
“When training and policy lead with fear, they lower expectations, reduce experimentation, and suppress exactly the kind of competence development that would make AI use safer and more effective. They create the conditions for bad outcomes by discouraging the learning that would prevent them . . . friction introduced in the name of risk reduction actually increases risk by driving use underground or creating alert fatigue. Repeated warnings lose their effect. Approval processes that feel disproportionate get routed around. The organisation ends up with the worst of both worlds: friction that annoys users without actually improving safety, and shadow use it cannot see.”
The alternative he suggests is competence before compliance. Build understanding first. Help people develop good judgement. Then layer in guardrails where they are genuinely needed. This reduces actual risk more effectively than blanket restrictions, and it does so without sacrificing visibility.
Guardrails that work
None of this means guardrails are unnecessary. Shadow AI carries real risks: privacy exposure, misinformation, misuse, and the creation of fragile, undocumented systems. These risks are context-dependent, varying by data sensitivity, task type, and organisational setting. The concerns are genuine – client data pasted into public models, hallucinated content published externally, undocumented automations that break silently – but the answer is not to hide from the reality; it is to make usage visible enough that you can actually address them.
The question is not whether to have guardrails but how to design them. The goal is minimal friction for low-risk use and targeted controls for high-risk applications. Most everyday use (drafting, summarising, thinking out loud) carries limited risk and needs limited governance. High-risk applications (processing sensitive data, generating public-facing content, automating decisions) clearly need more oversight.
Calibrating this requires knowing what is actually happening. Which brings us back to visibility. If use is hidden, you cannot distinguish high-risk from low-risk. You end up either under-governing everything or over-governing everything. Neither is good.
Is there really a choice?
Shadow AI is an asset masquerading as a problem. Attempting to prevent or restrict use is unlikely to work. Accepting that use is happening, and making it visible, lets you learn from it, build competence alongside compliance, and target your guardrails where they really matter.
The organisations that will get the most from AI are not the ones that suppress its use most effectively but those that learn from what their staff’s use of it can tell them.
Shadow AI isn’t a threat, but invisible and hidden AI use may well be.