Shadow AI refers to the unmonitored use of AI within organizations, bypassing IT or compliance oversight. Risks include hidden internal models, embedded AI in third-party apps, and unauthorized use of generative AI tools—posing threats to data privacy, governance, and model oversight.
Managing the Risks of Shadow AI
What is Shadow AI?
Shadow AI is the use of AI within an organization without the knowledge of or oversight from the IT or Compliance department. Within Shadow AI, there are three categories of concern that can be particularly relevant:
• Hidden Internally Built Models: Updating existing models in undetected ways for use cases such as internal automation or external interactions with customers (for example, updating a credit risk model using an OpenAI model).
• AI in 3rd Party Applications or Models: AI embedded within 3rd party applications or models upon installation or update, without the knowledge of the firm using the application.
• Unauthorized Internal Use of 3rd Party AI Tools: Using Generative AI tools such as ChatGPT to write code, get answers to questions, etc. without guarding against data and privacy leaks, the sharing of proprietary code, or a lack of proper governance.
Why is Shadow AI so Dangerous?
What makes these models especially dangerous is that firms don't know what they don't know, making this risk particularly difficult to address without consistent and robust monitoring systems. Here are some specific examples of when Shadow AI posed a significant risk to organizations:
• Data Breaches: A notable e-commerce company suffered a significant data breach when an employee used an unauthorized AI product to optimize customer data analysis. The tool lacked proper security measures, resulting in the leak of customer information.
• Biased Decision Making: An unapproved AI algorithm was found to be biased against certain demographic groups, which led to unfair lending practices. This resulted in a significant regulatory penalty for the organization.
• Operational Failure: A manufacturing company incurred major production delays and losses when an AI system without proper monitoring began making error-prone predictions for predictive equipment maintenance.
Regulatory Landscape
These models also fall under the supervision of several current and likely upcoming regulations, and therefore pose additional risk of regulatory penalties.
• SS 1/23: This Supervisory Statement from the PRA came into effect on May 17, 2024 and sets the expectations for banks and financial firms that operate within the UK. SS1/23 Principle 2.6 (Use of externally developed models and third-party vendor products) states that firms should satisfy themselves that vendor models have been validated to the same standards as their own internal MRM expectations.
• The AI Risk Management Framework (U.S.): Released by NIST, part of the U.S. Department of Commerce, on January 26, 2023, this framework guides organizations on how to govern, map, and measure AI risk, including 3rd party Shadow AI risk. NIST GOVERN 6.1: "Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party's intellectual property or other rights."
• The E.U. AI Act: This legislation passed by the E.U. more broadly regulates the use of AI within firms where it may directly impact the safety and well-being of the public, and holds firms accountable for errors or poor practices that lead to public harm.
• The Artificial Intelligence and Data Act (Canada): Sets the expectations for the use of AI within Canada in order to protect the interests of the public, requiring that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output. 3rd party vendors whose models risk creating bias or harm are likely covered by the risks described in the regulation.
Hidden Use of AI or GenAI within an Organization
Understanding where AI is used internally without the proper knowledge or oversight of the appropriate teams is a significant step in addressing the risk from Shadow AI. The following are strategies to mitigate this risk:
• Consistent Monitoring for Undetected AI Models: Periodically scheduled scans that detect the probable use of AI within models & EUCs can uncover risks before they result in errors and help meet regulatory requirements (a minimal scan sketch follows this list).
• Comprehensive AI Testing Suite: Implementing a comprehensive AI testing suite is crucial for detecting and controlling Shadow AI. This suite should include tests for data drift, validity, reliability, fairness, interpretability, and code quality (one such drift test is sketched after this list). Consistent documentation of test results in a standardized format helps maintain transparency and accountability for AI models that are detected.
• Large Language Models (LLMs) Vulnerability Testing: Testing LLMs for vulnerabilities such as bias, unfairness, harmful content generation, and the revealing of sensitive information helps stress-test a model before it is used by customers.
• Explainable Large Language Models (LLMs): Content attribution can help explain where within internal data sources the responses to prompts are coming from, helping to identify and mitigate causes of errors or the dissemination of incorrect information.
• LLM Hallucination Testing: New research suggests that hallucination rates for LLMs may be higher than initially expected. As competitors race to adopt this technology and leverage it to enhance the customer experience, it can be critical to leverage the latest developments in RAG models and challenger LLMs to monitor the rates at which LLMs give customers incorrect information, known as hallucination rates.
• Implementing Controls and Accountability Measures: Controlling the use of Shadow AI involves managing access to End User Computing (EUC) models and tools. Implementing an audit trail to track model changes and approval workflows to ensure accountability can help mitigate risks associated with Shadow AI (an audit-trail sketch also follows this list).
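As a concrete illustration of the scheduled scan described in the first bullet above, here is a minimal Python sketch that walks a directory and flags files containing signatures of common AI libraries and endpoints. The signature list and file extensions are illustrative assumptions, not a complete detection ruleset; commercial scanners use far more sophisticated techniques.

```python
# Minimal sketch of a scheduled scan for undetected AI usage in code and EUCs.
# The signature list and extensions below are illustrative assumptions.
import re
from pathlib import Path

AI_SIGNATURES = [
    r"\bimport\s+(openai|anthropic|transformers|sklearn|torch|tensorflow)\b",
    r"\bfrom\s+(openai|anthropic|transformers|sklearn|torch|tensorflow)\b",
    r"api\.openai\.com",  # direct REST calls to a hosted LLM
]
PATTERNS = [re.compile(sig, re.IGNORECASE) for sig in AI_SIGNATURES]

def scan_for_ai_usage(root: str, extensions=(".py", ".bas", ".vba", ".r")):
    """Walk a directory tree and flag files containing AI-related signatures."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in extensions:
            continue
        text = path.read_text(errors="ignore")
        hits = [p.pattern for p in PATTERNS if p.search(text)]
        if hits:
            findings.append({"file": str(path), "signatures": hits})
    return findings

if __name__ == "__main__":
    for finding in scan_for_ai_usage("./models"):
        print(f"{finding['file']}: {finding['signatures']}")
```

Running such a scan on a schedule, and diffing the findings against an approved model inventory, turns one-off discovery into the consistent monitoring the bullet describes.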
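And as one example of a test from the suite described above, the following sketch checks a single numeric feature for data drift using a two-sample Kolmogorov-Smirnov test. The significance threshold and the credit-score example data are assumptions for illustration only.

```python
# Minimal sketch of one test from an AI testing suite: detecting data drift
# on a numeric feature. The 0.05 threshold is an illustrative assumption.
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05):
    """Compare a live feature distribution against the training reference."""
    statistic, p_value = ks_2samp(reference, live)
    drifted = p_value < alpha  # low p-value: distributions likely differ
    return {"ks_statistic": statistic, "p_value": p_value, "drifted": drifted}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train_scores = rng.normal(600, 50, size=5_000)  # e.g. scores at training time
    live_scores = rng.normal(620, 60, size=1_000)   # shifted production population
    print(check_feature_drift(train_scores, live_scores))
```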
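Finally, to make the audit trail and approval workflow from the last bullet concrete, here is a minimal sketch of both. The statuses, names, and in-memory store are illustrative assumptions; a real implementation would persist entries to tamper-evident storage.

```python
# Minimal sketch of an audit trail with an approval workflow for model changes.
# Statuses and the in-memory store are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelChange:
    model_id: str
    author: str
    description: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "pending"      # pending -> approved / rejected
    approver: str | None = None

class AuditTrail:
    def __init__(self):
        self._entries: list[ModelChange] = []

    def record(self, change: ModelChange) -> None:
        self._entries.append(change)

    def approve(self, change: ModelChange, approver: str) -> None:
        # Segregation of duties: authors cannot approve their own changes.
        if approver == change.author:
            raise ValueError("author cannot approve their own change")
        change.status, change.approver = "approved", approver

    def pending(self) -> list[ModelChange]:
        return [c for c in self._entries if c.status == "pending"]

trail = AuditTrail()
change = ModelChange("credit_risk_model", "analyst_a", "replaced scoring logic with an OpenAI call")
trail.record(change)
trail.approve(change, approver="mrm_reviewer")
print(change)
```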
Identifying AI in 3rd Party Applications
AI adoption in financial services has surged 2.5x from 2017 to 2022, per McKinsey, and
continues to rise. As third-party vendors increasingly embrace AI, the risk of its use without
firms' knowledge also grows. Key mitigation strategies include:
• Identifying AI Models within 3rd Party Applications: Monitoring the behavior of 3rd party tools and executables and looking for patterns that may be indicative of the use of AI can be a necessary way to identify the hidden risk of Shadow AI. Consistently scheduled scans for these patterns are a great way to mitigate this risk.
• Interdependency Map: A model's level of risk is highly dependent on the models and data sources that serve as inputs to that model. With an interdependency map, you can easily visualize these relationships and interdependencies. Paying special attention to 3rd party models that feed into high-impact models can help prioritize where to look for Shadow AI (see the graph sketch after this list).
• Security Vulnerabilities: Even if firms are aware of the use of AI within a 3rd party application, it can be important to automate checks for security vulnerabilities within 3rd party AI libraries.
• Monitor 3rd Party Model Performance: Many of these 3rd party models are black boxes, and here the risk of Shadow AI is highest, as firms do not know what techniques a 3rd party vendor is using. Monitoring 3rd party models for sudden changes in performance can be an indicator of the use of Shadow AI (a drift-monitoring sketch also follows this list).
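To make the interdependency map concrete, here is a minimal sketch that represents models and data sources as a directed graph and finds 3rd party inputs that reach high-impact models. All model names and impact designations are hypothetical.

```python
# Minimal sketch of an interdependency map: models and data sources as a
# directed graph. All names and impact ratings are hypothetical examples.
from collections import deque

# Directed edges: input -> models that consume it.
FEEDS_INTO = {
    "vendor_credit_score": ["credit_risk_model"],  # 3rd party input
    "internal_txn_data": ["credit_risk_model", "fraud_model"],
    "credit_risk_model": ["loan_pricing_model"],
}
THIRD_PARTY = {"vendor_credit_score"}
HIGH_IMPACT = {"loan_pricing_model", "fraud_model"}

def downstream(node: str) -> set[str]:
    """All models reachable from a given input, via breadth-first traversal."""
    seen, queue = set(), deque([node])
    while queue:
        for nxt in FEEDS_INTO.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Prioritize review of 3rd party inputs that reach high-impact models.
for source in THIRD_PARTY:
    touched = downstream(source) & HIGH_IMPACT
    if touched:
        print(f"Review {source}: feeds high-impact models {sorted(touched)}")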
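And for performance monitoring of black-box vendor models, one widely used measure of score distribution shift is the Population Stability Index (PSI). The sketch below computes it; the 0.2 alert threshold is a common rule of thumb rather than a universal standard, and the score data is simulated for illustration.

```python
# Minimal sketch of monitoring a black-box 3rd party model's outputs for
# sudden shifts using the Population Stability Index (PSI).
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and a recent one."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparse bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    baseline = rng.beta(2, 5, size=10_000)  # vendor model scores last quarter
    recent = rng.beta(2.8, 4, size=2_000)   # scores after a silent vendor update
    psi = population_stability_index(baseline, recent)
    print(f"PSI = {psi:.3f} -> {'ALERT: investigate vendor model' if psi > 0.2 else 'stable'}")
```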
Monitoring Improper Use of 3rd Party Generative AI
Unregulated reliance on tools such as ChatGPT or Microsoft Copilot can lead to accidents such as the incident at a major technology company where code was leaked to OpenAI through employee use of its LLMs. Effective risk management of this unauthorized use could involve the following:
• GenAI Detection Reporting: Scanning your landscape of
EUCs and Models through cutting-edge AI detection algorithms
can help get a better sense of the overall risk profile of your
ecosystem in terms of inadvertent uploads into AI generators.
• Securing Proprietary Code: Within code repositories, flagging the use of Generative AI can help uncover risks of leaking proprietary code to 3rd parties (a pre-commit check sketch follows this list).
• Flagging Hallucination: Running reports through AI detection can
help identify documents that might suffer from errors due to
hallucination from LLMs.
• Demonstrating Governance and Compliance: Regulations such as the EU AI Act and SS1/23 are just the first of many requiring the documentation and enforcement of policy regarding the internal use of Generative AI within an organization.
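As one way to flag Generative AI use in code repositories, the following sketch implements a pre-commit style check that scans staged files for markers of GenAI services. The marker list is an illustrative assumption; extend it with the services your firm monitors.

```python
# Minimal sketch of a pre-commit check that flags proprietary code referencing
# 3rd party GenAI services. The marker list is an illustrative assumption.
import re
import subprocess
import sys

GENAI_MARKERS = [
    r"api\.openai\.com",
    r"api\.anthropic\.com",
    r"copilot",                       # tooling references in config or comments
    r"generated\s+by\s+chatgpt",      # attribution comments left in pasted code
]
PATTERNS = [re.compile(m, re.IGNORECASE) for m in GENAI_MARKERS]

def staged_files() -> list[str]:
    """List files staged for commit (standard git plumbing)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    flagged = []
    for name in staged_files():
        try:
            with open(name, errors="ignore") as fh:
                text = fh.read()
        except OSError:
            continue
        hits = [p.pattern for p in PATTERNS if p.search(text)]
        if hits:
            flagged.append((name, hits))
    for name, hits in flagged:
        print(f"GenAI marker in {name}: {hits}", file=sys.stderr)
    return 1 if flagged else 0  # non-zero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```

Wired into a repository as a pre-commit hook, a check like this surfaces GenAI usage at the moment code is committed rather than after it has already been shared.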
Effective Shadow AI Risk Management
According to The Economist, 77% of bankers report that AI will be the key differentiator between winning and losing banks, so avoiding the use of AI is not an option.
Shadow AI can be a tough challenge for organizations to face, but with the right
level of proactive monitoring, firms can unleash the massive benefits of AI, and
especially GenAI, while limiting the risk.
This involves effectively monitoring the risk from hidden AI models
being used within the organization, the AI within 3rd party
applications, and the submission of information into 3rd party AI
Generators.
About Us
• Established in 1988, CIMCON Software, LLC is a pioneer in end-user
computing and model risk management, serving over 800 companies across
industries.
• Recognized by Gartner, Insurance ERM, and others as a top risk management
vendor, CIMCON brings 25+ years of experience and industry best practices to
support AI & GenAI readiness and governance.
• With the largest global installed base, our feature-rich, extensively tested solutions
offer unmatched depth, support, and reliability.
Contact Us
Boston (Corporate Office)
+1 (978) 692-9868
234 Littleton Road
Westford, MA 01886, USA
New York
+1 (978) 496-7230
394 Broadway
New York, NY 10013