Shadow AI refers to the unsanctioned use of AI tools or models within organizations, often without IT’s knowledge. As AI adoption grows, so do the risks of internal misuse and hidden AI in third-party apps—making oversight critical for businesses.
Shadow AI: What It Is and How to Manage the Risk
About Us
• Established in 1988, CIMCON Software, LLC is a pioneer in end-user
computing and model risk management, serving over 800 companies
across industries. Recognized by Gartner, Insurance ERM, and others as a
top risk management vendor, CIMCON brings 25+ years of experience and
industry best practices to support AI & GenAI readiness and governance.
With the largest global installed base, our feature-rich, extensively tested
solutions offer unmatched depth, support, and reliability.
What is it?
• Shadow AI refers to AI applications or models used within an organization without the explicit consent or knowledge of the firm's IT organization. There are normally two categories of concern when it comes to Shadow AI:
Internal use of Shadow AI: leveraging AI in internally built models or applications (e.g., using GenAI to write code or get answers to questions), or building internally used AI tools without the knowledge of IT.
AI in 3rd party applications or models: AI present within 3rd party applications or models upon installation or update, without the knowledge of the firm using the application.
• The identification and mitigation of shadow AI within either use case is a matter of
increasing concern and importance to firms everywhere as the use of AI proliferates
across industries and within organizations.
Why is it important?
• According to McKinsey, AI adoption within the financial services industry has
grown by 2.5x from 2017 to 2022 and will no doubt continue to increase.
• As AI use cases spread, so will the risks associated with them. AI is high risk because its outputs can be much more difficult to predict and understand, and as AI capabilities accelerate and improve, this problem will only be exacerbated.
• The cost and complexity of AI models can also scale exponentially. For example, experts estimate that running OpenAI's GPT model costs about $1 million a day, and that in upgrading from GPT-3 to GPT-4 the number of parameters scaled from one billion to 100 billion.
• This illustrates just how complex AI can be and how quickly that complexity can grow. Generative AI has similarly been seen to have higher rates of hallucination than originally suspected and has made some embarrassing, high-profile errors.
• According to The Economist, 77% of bankers report that AI will be the key differentiator between winning and losing banks, so avoiding the use of AI altogether is not a realistic option.
• The prevalence of shadow AI shows that even if you want to avoid it, keeping members of your organization, or the tools you leverage from 3rd party vendors, from adopting it can be even more difficult.
Regulatory Landscape
• The regulatory landscape for managing shadow AI in third-party applications is
rapidly evolving. Senior Management Functions (SMFs) are increasingly held
accountable for identifying and testing all AI models—including third-party
ones—against internal standards. As a result, identifying and mitigating
shadow AI risks is becoming critical. Relevant regulations include:
SS 1/23: This Supervisory Statement from the PRA took effect on May 17, 2024 and sets expectations for banks and financial firms operating within the UK. SS 1/23 Principle 2.6 (use of externally developed models and third-party vendor products) states that firms should: (i) satisfy themselves that vendor models have been validated to the same standards as their own internal MRM expectations.
The AI Risk Management Framework (U.S.): Released on January 26, 2023 by NIST, part of the U.S. Department of Commerce, this framework guides organizations on how to govern, map, measure, and manage AI risk to the organization, including 3rd party shadow AI risk. NIST GOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party's intellectual property or other rights.
The E.U. AI Act: This legislation passed by the E.U. more broadly regulates uses of AI within firms that may directly impact the safety and well-being of the public, and holds firms accountable for errors or poor practices that lead to public harm.
The Artificial Intelligence and Data Act (Canada): Sets expectations for the use of AI within Canada in order to protect the interests of the public, and requires that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output. 3rd party vendors that pose a risk of creating bias or harm within models are likely covered by the risks described in the regulation.
Mitigating the Risk from Shadow AI
• There are many ways to address the risk from Shadow AI. Below are practices that can help, each followed by a brief illustrative sketch:
Identifying the internal use of GenAI: EUCs and models generated with GenAI can leak into the public sphere, or can hallucinate and produce errors, so testing specific models and EUCs to estimate the probability that GenAI was used can be helpful.
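A minimal Python sketch of such a probability check, assuming a simple keyword-marker heuristic; the marker list, scoring, and sample text are illustrative only, not any specific product's detection method:

import re

# Illustrative markers that often show up in GenAI-produced code or EUC
# content; a real detector would use richer statistical and stylistic features.
GENAI_MARKERS = [
    r"as an ai language model",
    r"here (is|'s) (an?|the) (example|implementation)",
    r"note that this is a simplified",
    r"generated by (chatgpt|an ai)",
]

def genai_likelihood(text: str) -> float:
    """Return a crude 0-to-1 score for how likely `text` was produced with GenAI."""
    hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in GENAI_MARKERS)
    return min(1.0, 2 * hits / len(GENAI_MARKERS))

if __name__ == "__main__":
    sample = "Here is an example implementation. Note that this is a simplified version."
    print(f"GenAI likelihood: {genai_likelihood(sample):.2f}")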
Identifying AI Models within 3rd Party Applications: Monitoring the behavior of 3rd party tools and executables, and looking for patterns that may indicate the use of AI, can be a necessary way to surface hidden shadow AI. Consistent, scheduled scans are a practical way to mitigate this risk.
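A minimal Python sketch of one such scan, assuming that filename signatures of common AI runtimes are a useful first-pass indicator; the signature list and install path are hypothetical:

import pathlib

# Illustrative filename signatures of common AI runtimes that a scheduled
# scan might flag inside a third-party application's install directory.
AI_SIGNATURES = ("onnxruntime", "libtorch", "tensorflow", "openvino", "llama")

def scan_for_ai_components(install_root: str) -> list:
    """Flag files under `install_root` whose names suggest a bundled AI runtime."""
    root = pathlib.Path(install_root)
    if not root.is_dir():
        return []
    return [p for p in root.rglob("*")
            if p.is_file() and any(s in p.name.lower() for s in AI_SIGNATURES)]

if __name__ == "__main__":
    # Hypothetical vendor install path; point this at your real directories
    # and run it from a scheduler (cron, Task Scheduler) for consistent scans.
    for hit in scan_for_ai_components("/opt/vendor_app"):
        print(f"Possible AI component: {hit}")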
Interdependency Map: A model’s level of risk is highly dependent on the models
and data sources that serve as inputs to that model. With an interdependency map,
you can easily visualize these relationships and interdependencies. Paying special
attention to 3rd Party Models that feed into high impact models can help prioritize
where to look for shadow AI.
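A minimal sketch of that prioritization, using the open-source networkx library; the model inventory below is hypothetical:

import networkx as nx

# Hypothetical inventory: each edge points from an input (a model or
# data source) to the model that consumes it.
deps = nx.DiGraph()
deps.add_edge("VendorCreditScore [3rd party]", "LoanPricingModel")
deps.add_edge("MarketDataFeed", "LoanPricingModel")
deps.add_edge("LoanPricingModel", "CapitalAdequacyModel")

HIGH_IMPACT = {"CapitalAdequacyModel"}

# Any 3rd party node upstream of a high-impact model is a priority
# place to look for shadow AI.
for model in HIGH_IMPACT:
    upstream = nx.ancestors(deps, model)
    flagged = sorted(n for n in upstream if "[3rd party]" in n)
    print(f"{model}: check {flagged} for shadow AI first")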
Security Vulnerabilities: Even when firms are aware of the use of AI within a 3rd party product, it is important to automate checks for security vulnerabilities within the 3rd party AI libraries it depends on.
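One way to automate this, sketched in Python around pip-audit (a real PyPA vulnerability scanner, installed separately); the package list is illustrative and the JSON shape may vary by pip-audit version:

import json
import subprocess

# AI-related packages to audit; swap in the libraries your 3rd party
# applications actually bundle.
AI_PACKAGES = {"torch", "transformers", "onnxruntime"}

result = subprocess.run(["pip-audit", "--format", "json"],
                        capture_output=True, text=True)
report = json.loads(result.stdout)
for dep in report.get("dependencies", []):
    if dep["name"] in AI_PACKAGES and dep.get("vulns"):
        ids = [v["id"] for v in dep["vulns"]]
        print(f"{dep['name']} {dep['version']}: {ids}")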
Monitor 3rd Party Model Performance: Many 3rd party models are black boxes, and here the risk of shadow AI is highest because firms do not know what techniques a 3rd party vendor is using. Monitoring 3rd party models for sudden changes in performance can be an indicator of the use of shadow AI.
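A minimal Python sketch of such monitoring, assuming a trailing-window z-score test over a daily performance metric; the accuracy series is hypothetical:

import statistics

def sudden_change(scores, window=10, z_threshold=3.0):
    """Flag the latest score if it falls more than `z_threshold` standard
    deviations from the trailing window's mean."""
    if len(scores) <= window:
        return False
    history, latest = scores[-window - 1:-1], scores[-1]
    mu, sigma = statistics.mean(history), statistics.stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > z_threshold

# Hypothetical daily accuracy for a black-box vendor model; the abrupt jump
# at the end is the kind of shift that can signal a silent AI swap-in.
daily_accuracy = [0.81, 0.80, 0.82, 0.81, 0.79, 0.80, 0.82, 0.81, 0.80, 0.81, 0.93]
print("Investigate vendor model" if sudden_change(daily_accuracy) else "No anomaly")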
AI Testing Validation Suite: Have a comprehensive testing suite for models that can similarly pick up strange behavior indicating the use of shadow AI. An effective testing suite could include tests for data drift, validity & reliability, fairness, interpretability, and code quality, among many others. The results of these tests should be consistently documented in a standardized and easy-to-follow way (a data drift check is sketched after this list).
Proper Controls, Workflows, and Accountability: Helping control the use of
shadow AI on internally developed tools can be a function of controlling who has
access to what EUCs and Models. This can be done through an Audit Trail which also
tracks who makes changes to what models as well as through Approval Workflows
which can provide accountability for who approved models that were behaving
suspiciously.
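As referenced in the testing suite item above, here is a minimal Python sketch of one common data drift test, the Population Stability Index; the distributions are synthetic and the 0.25 threshold is a widely used convention, not a regulatory requirement:

import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a recent sample; values above
    roughly 0.25 are conventionally read as significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) for empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # input distribution at validation time
recent = rng.normal(0.4, 1.2, 5000)    # shifted production distribution
print(f"PSI = {population_stability_index(baseline, recent):.3f}")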
Effective Management of Shadow AI
• Shadow AI is already a major problem for firms and organizations and it’s
only going to get worse as AI spreads. The greatest risk of Shadow AI is
that you don’t know it’s a problem until you have the proper tools to
identify and mitigate it.
• Managing Shadow AI is essential for firms, not just because of regulatory pressure, but also because of the overall increase in the risk of errors that can be quite costly.
• Leveraging battle-tested tools and a team with over 25 years of experience is the best way to get a handle on this issue and be proactive about solving problems before they arise.
AI Risk Management Framework
• Explore the realm of Artificial Intelligence (AI) with our AI Risk
Management Policy. This concise guide covers the spectrum of AI models,
including supervised, unsupervised, and deep learning, and emphasizes
making AI trustworthy based on the NIST AI Risk Management Framework.
• Learn to assess and manage AI Risk, cultivate a culture of risk awareness,
and utilize periodic testing with tools like ours. This policy is your essential
toolkit for responsible and effective AI utilization in your organization.
Contact Us
Boston (Corporate Office)
+1 (978) 692-9868
234 Littleton Road
Westford, MA 01886, US

New York
+1 (978) 496 7230
394 Broadway
New York, NY 10013
Thank You