The Resurgence of ‘Shadow IT’ (And How to Handle It)
You probably know this is happening: whether your company has 10 people or 1,000, your team is almost certainly using unsanctioned AI tools at work, probably daily. Usually that's ChatGPT, Claude or Gemini on personal subscriptions - you may even be paying the expense claims.
And if you haven't formally dealt with it yet, you're in good company. Most companies haven't either.
This isn't new; it's been building for the past six months or so. But it is a serious problem, and you need to act sooner rather than later. In this article I'll cover why it's a problem and how to approach it.
We've Been Here Before (Sort Of)
In the 1990s and early 2000s, we had a version of this exact situation. At the time we called it the "end-user computing" or "shadow IT" problem.
The dynamic was straightforward: IT owned all computing resources. Everything was centralised, controlled, and slow. If you needed an IT solution to do your job, you submitted a request, fought other business owners for priority, and waited months for the IT team to allocate resources and build it. So business users got creative. Finance and Sales built reporting systems from complex VBA macros in MS Excel, Marketing ran campaigns from MS Access databases, and Operations depended on a complex home-grown Lotus Notes setup.
These weren't people out to cause problems; they were responding pragmatically to slow official processes and limited resources. People had real work to do, and they used whatever tools were available to get it done. The problem compounded over time. By the early 2000s, businesses started to realise that critical business functions depended on fragile, undocumented 'systems' built by end users: no backup, no version control, no security review, no audit trail. It was an operational risk nightmare with real costs - spreadsheet errors that took months to catch, database failures that caused data leaks and compliance breaches.
Audits of end-user computing applications revealed that, in some areas, nobody actually knew how key decisions were being made. I vividly remember the untangling: it took years, and it was expensive.
We're in a Similar Place Now (But It's Fundamentally Different)
The structure of the problem is the same. The nature of the risk is probably worse.
Your colleagues are doing what they've always done when official tools don't meet their needs: they're finding better alternatives. Except this time, the risk isn't just about access control, backup procedures or who can maintain it. It's all of those plus something much more fundamental: where your confidential information is going, who has the rights to process your data, and whether you can trust and explain the decisions made with these tools.
Here's what we're hearing from all sorts of companies:
- A sales team member pastes customer conversations into ChatGPT to help draft follow-up emails.
- An operations manager feeds your confidential internal process documentation into Claude to help optimize workflows.
- A recruiter uses a personal ChatGPT account to help evaluate candidate resumes which contain sensitive PII.
- A marketing manager uses Gemini to create images for your latest social media posts.
- A product manager drafts business requirements documents using ChatGPT.
In each case, the AI provider is probably ingesting that data, transferring it offshore, and, under terms you almost certainly didn't read, may be using it to train its models.
The terms of service differ by provider and subscription tier, but many free or consumer-focused plans explicitly reserve the right to use your data to improve their models.
So when your sales team pastes a customer conversation into free ChatGPT, they're not just sharing it with OpenAI. They're potentially making it available as training data for a model that your competitors can use. That's a competitive risk. It's also a privacy risk if that conversation included customer PII.
And it gets worse in regulated industries. If you're in healthcare, financial services, or another regulated sector, the HIPAA, GDPR, or SOX implications are serious. If a customer conversation containing health information or payment details ends up in an LLM training dataset, you've probably violated compliance requirements - and you probably won't even know it happened.
And Here's Where It Gets Further Complicated
There's a layer of risk that didn't exist in the 90s shadow IT problem: emerging AI regulations designed to protect consumers demand that you be able to demonstrate how decisions are made and, if AI was involved, how it influenced them.
The EU's AI Act requires organisations to maintain transparency about AI-driven decisions. GDPR has specific requirements around automated decision-making. In the U.S., the SEC is issuing guidance on disclosure of AI use in financial services, and broader requirements are likely to follow.
The practical implication is this: if your team uses an unsanctioned AI tool to make or inform a business decision - pricing, customer escalation, hiring, risk assessment - you need to be able to show that decision trail (a minimal sketch of such a record follows this list). You need to know:
- What data went into the AI?
- What prompt was used?
- What did the model output?
- Who reviewed and approved the decision?
- How was the AI influence disclosed to the affected party?
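To make that concrete, here's a minimal sketch of what a per-decision audit record could look like. This is an illustration, not a standard schema: the class and field names (AIDecisionRecord, reviewed_by, disclosure, and so on) are assumptions made for the example. The point is that each question above maps to something you can actually capture and store.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: the field names are assumptions, not a regulatory standard.
@dataclass
class AIDecisionRecord:
    decision: str      # the business decision that was made or informed
    input_data: str    # what data went into the AI
    prompt: str        # the exact prompt that was used
    model: str         # which model (and version) produced the output
    output: str        # what the model returned
    reviewed_by: str   # who reviewed and approved the decision
    disclosure: str    # how AI involvement was disclosed to the affected party
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Each record would be appended to durable, tamper-evident storage.
record = AIDecisionRecord(
    decision="Offer a 10% retention discount to customer A-381",
    input_data="Anonymised account history for customer A-381",
    prompt="Given this account history, should we offer a retention discount?",
    model="example-model-v1",
    output="Recommend a 10% discount; churn risk is high.",
    reviewed_by="jane.doe (account manager)",
    disclosure="Customer informed that AI-assisted analysis was used",
)
print(record)
```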
With most of the tools available there's no robust audit trail and no proof. If a regulator or a customer asks "How did you make this decision?" and the honest answer is "An AI told us to", without documentation, you're exposed.
In the original shadow IT era, the risk was operational and financial. You'd discover a broken spreadsheet. It was bad, but it was fixable. Now, the risk is legal and reputational. You could be violating regulations you don't even know about.
The Problems Are Compounded by the Pace of Change
Something else is different from the 90s and 2000s: the tools themselves are evolving at an insane pace. ChatGPT was dominant early; more recently Claude seems to have taken over, and Gemini is competitive too. The useful lifespan of a model feels like it's trending towards weeks rather than months. And every provider is trying to lock you into their models via their tools. If you're not careful, you risk falling into the same trap many fell into with cloud computing, where huge switching costs enforce vendor lock-in.
The solution? Your approach to AI tools needs to treat model flexibility as a requirement, not a nice-to-have. You shouldn't be locked into one model. You should have the ability to swap between 2–3 models that do similar things, depending on what makes sense at any given time.
This means building your tooling and processes around capabilities ("we need strong collaboration features"), not specific tools ("we use Claude"). And it may mean curating a toolkit where multiple tools can do similar things. A rough sketch of what this looks like in code follows.
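Here's a minimal Python sketch of that idea, under stated assumptions: the ChatModel interface, the stub adapters, and the APPROVED_MODELS registry are all hypothetical names made up for illustration, and the adapters just echo their input where a real implementation would wrap each vendor's SDK. What it shows is the design choice: workflows depend on a capability, so swapping providers becomes a configuration change rather than a rewrite.

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """The capability your processes depend on, independent of any one vendor."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

# Stub adapters: a real implementation would wrap each vendor's SDK.
class ModelA(ChatModel):
    def complete(self, prompt: str) -> str:
        return f"[model-a] response to: {prompt}"

class ModelB(ChatModel):
    def complete(self, prompt: str) -> str:
        return f"[model-b] response to: {prompt}"

# Registry of vetted models: swapping providers is a config change,
# not a rewrite of every workflow that uses them.
APPROVED_MODELS: dict[str, ChatModel] = {"model-a": ModelA(), "model-b": ModelB()}

def draft_followup_email(notes: str, model_name: str = "model-a") -> str:
    """Business logic depends on the ChatModel interface, not a vendor."""
    model = APPROVED_MODELS[model_name]
    return model.complete(f"Draft a polite follow-up email based on: {notes}")

# Switching models is a one-argument change.
print(draft_followup_email("Customer asked for pricing details", model_name="model-b"))
```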
What Actually Matters (The Real Issues)
So here's what's actually at stake, for all companies:
- IP and process loss — If a team member builds a valuable workflow or analysis process using a personal account or a tool that doesn’t share that knowledge, and they leave, that knowledge and capability walks out the door with them.
- Data privacy and compliance risk — If your team is feeding confidential customer or operational data into consumer grade AI tools, you may be violating privacy regulations and exposing the company to legal liability.
- Regulatory exposure — If you're using AI to inform business decisions and you can't demonstrate how those decisions were made, you're vulnerable to compliance failures.
- Vendor lock-in — If your processes become dependent on a single AI model, changes to pricing, terms, or availability could force costly rewrites.
- Decision opacity — If decisions are being made with AI input but there's no audit trail or documentation, you can't defend those decisions if questioned.
- Competitive disadvantage - adopting generative and agentic AI is no longer optional; it's table stakes across all industries. Companies that fail to adopt it safely will fall behind their peers.
These are real risks, and they're the kind that sneak up on you. Depending on your use cases, by the time you notice them they've become structural, and quite possibly catastrophic if left unmanaged.
We've Been Here Before — And We Know How to Do It Better
The good news is that the companies which tackled the shadow IT problem in the 2000s learned how to do this. And the playbook, adapted for AI, actually works.
The key insight from that era is that you can't stop people from seeking out tools that help them get work done. So you need to find a safe way to satisfy that need.
20 years ago the solution wasn't to simply ban all shadow systems. It was to acknowledge them, understand them, and build governance around the ones that mattered. Many got replaced with official alternatives. Others got "managed" with governance controls built around them. The key was visibility and intentional decision-making.
Today, you need to act quickly and strike a balance: give your teams access to the sorts of AI tools they're craving, while ensuring governance controls are in place before critical processes become too dependent on unsanctioned tools.
A Practical Framework for Leaders
If you're a business leader, here's a framework for how to think about this:
Reality Check: Your team wants better AI tools and will seek them out - they're too useful and too accessible to ignore. Trying to ban them creates friction and drives adoption further underground. So the question isn't "Do we allow AI tools?" It's "Which tools do we allow, and how do we ensure they're used safely?"
Step 1: Understand What's Actually Happening
Before you can build a strategy, you need baseline visibility. Have conversations with your team:
- What AI tools are people currently using?
- What are they using them for?
- What data are they feeding into these tools?
- How critical is that work to your operations?
Your goal is to move from "we assume this is happening" to "we know exactly what's happening."
Step 2: Define What Matters for Your Organisation
Not all AI tools are equal, and not all risks are equal. A fintech startup has different requirements than a B2B SaaS company, which has different requirements than a bank.
For your organisation, what actually matters?
- Data privacy and training: Will the vendor use your data to train their models? Do you have contractual protection against this?
- Audit and compliance: Can you see what was processed, by whom, and when? Can you demonstrate decision trails?
- Data residency and security: Where does data sit? Is it encrypted? Can you control access?
- Regulatory fit: Does the tool meet your compliance requirements (GDPR, HIPAA, SOX, etc.)?
- IP ownership: If someone creates something with the tool, does the business own it? What happens if the person leaves? Can multiple users collaborate with that data?
- Vendor flexibility: Can you switch tools if the vendor changes pricing or terms? Are you locked in?
- Reliability and support: Is there an SLA? What's the vendor's track record?
You don't need every tool to excel on every criterion, but you do need to know which tools meet which standards and where your risks lie.
Step 3: Curate a Toolkit
Once you know your standards, select approved tools that meet them. It might be two tools or more. The goal is to give your team what they need whilst managing the risks.
For example, a rapidly growing financial services company we worked with approved:
- Copilot Enterprise (for general writing and brainstorming, with data residency controls)
- QuivaWorks (for AI assistants that the whole team could leverage for business critical tasks including handling customer data and compliance-sensitive decisions)
Each was selected for specific use cases. Employees could choose the right tool for the job, but the choice was constrained to vetted options.
Step 4: Communicate Clearly
It's critical to bring your team along on the journey and be receptive to their feedback:
- Explain the "why" - "We're approving these tools because they meet our data protection and compliance standards. Here's what that means for you."
- Make it simple - A one-page guide listing approved tools, what they're good for, and how to get access.
- Be honest about trade-offs - "We can't accept you using the free tier of Tool X because of privacy concerns, but Tool Y does something similar and meets our standards. It costs a bit more, but the protection is worth it."
- Have an exception process - If someone needs a tool outside the approved set, they can request it. Be reasonable.
- Celebrate the wins - showcase the efforts of the trailblazers who are finding new ways to do things better using the AI tools.
Step 5: Support and Manage in Practice
Approved tools still need guidelines:
- What data can people put in? (Don't paste PII without anonymising - see the sketch after this list. Don't share customer conversations verbatim. Think before you paste.)
- How are we monitoring usage? (We log access and high-level usage. We're not reading your prompts.)
- What happens if someone violates the policy? (Education first. Escalation if needed.)
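On the first guideline, here's a deliberately naive sketch of what an anonymisation pass can look like. The regex patterns are assumptions that only catch obvious emails and phone-like numbers; a real deployment would use a proper PII-detection or DLP tool, but even a crude check makes "think before you paste" actionable.

```python
import re

# Naive illustrative patterns; real PII detection needs a dedicated tool.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
PHONE = re.compile(r"(?:\+?\d[\d\s().-]{7,}\d)\b")

def anonymise(text: str) -> str:
    """Replace obvious PII with placeholders before text leaves the business."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

message = "Follow up with jane@example.com on +44 20 7946 0958 about renewal."
print(anonymise(message))
# -> Follow up with [EMAIL] on [PHONE] about renewal.
```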
And actively support adoption:
- Provide training. Share best practices. Create a place for people to discuss how they're using these tools effectively.
- Track and celebrate wins. When someone uses an approved tool to solve a problem faster or better, share the story.
Next Steps
- Start the conversation. Have a real talk with your team about what tools they're using and what problems they're trying to solve. You'll learn more than you expect.
- Define your standards. What matters most to your organisation? (Privacy? Compliance? Vendor flexibility?) Write it down.
- Pick your tools. Based on your actual needs and standards, select a curated set.
- Communicate clearly. Roll it out simply and honestly.
- Support adoption. Make it easy to do the right thing.
It's not complicated. It just takes a deliberate choice to manage it rather than hope it works out.
Why You Might Want to Consider QuivaWorks
QuivaWorks was built to ease AI adoption across teams, with the assurance of enterprise-grade security. End-users love what they're able to achieve with it, whilst you get the comfort that it's fit for purpose as a business tool.
- Data privacy and training: When QuivaWorks shares your data with a model provider it's encrypted, and the providers are contractually bound not to use it for training.
- Audit and compliance: QuivaWorks keeps a fully transparent log of all the activity inside the platform - you have access to see what was processed, by whom, and when.
- Data residency and security: QuivaWorks data is encrypted and stored securely on a fully resilient cloud platform. You have the power to control user access to your account.
- Regulatory fit: QuivaWorks has powerful tools for explaining what instructions were given to your chosen AI model(s) and how information was processed. AI Assistant configuration (choice of model, instructions, knowledge, integrations) is transparent and version-controlled: you can trace and restore prior configuration versions, and Assistant thinking and tool-call steps can be audited and analysed for each chat session.
- IP ownership: Your QuivaWorks account is fully encrypted, and we have no access to your data. You control who can create and administer assistants, and assistants can be shared across the team, meaning you're not exposed if an individual user moves on.
- Vendor flexibility: Models and providers are interchangeable on a per-assistant basis, removing lock-in and allowing you to stay current as new models are released.
- Reliability and support: Different levels of support are available with the different QuivaWorks plans.
Sign up for a free QuivaWorks account here