Uploaded on Jul 29, 2025
Supervisory Statement 1/23 by the PRA sets new standards for Model Risk Management and AI governance in UK banks, effective May 17, 2024. It defines model controls, inventory, validation, and oversight practices to ensure compliance and mitigate emerging risks.
Understanding SS1/23 Principles for Model Risk Management (MRM)
• Supervisory Statement 1/23 is a PRA supervisory statement on Model Risk Management (MRM) for banks within the U.K. that went into effect on May 17, 2024.
• While SS1/23 covers the governance of all models within a firm and defines what a model is quite broadly, it also explicitly calls out AI in a sub-principle of the statement, as noted on the Bank of England’s website.
• This marks a major milestone in the governance of models as well as the use of AI.
• The European Artificial Intelligence Office was also established within the European Commission in February 2024 to ensure the safety and trustworthiness of AI as it is used within the EU.
• The key components of SS1/23 include:
• Model Identification and Risk Classification: Having a clear definition of what constitutes a model and maintaining an up-to-date inventory to manage model risk.
• Controls and Governance: Creating strong structures of governance to support
MRM, with board-level responsibilities and Senior Management Function (SMF)
accountability.
• Model Development and Implementation: Setting standards for the design, implementation, and testing of models to ensure that they meet specific testing and performance criteria.
• Independent Validation and Ongoing Monitoring: Having documented practices in place for continuous validation and performance assessment to ensure models remain fit for purpose and adhere to regulatory standards.
• Now, more than ever, it is important to proactively implement proper governance for the proliferating models within your organization.
Why is this regulation important?
• A survey from The Economist shows that 77% of bankers believe that AI will be a key differentiator for banks; at the same time, Gartner has predicted that 85% of AI projects will deliver erroneous outcomes.
• Managing the proliferation not just of models, but of the different types of models and the departments generating them (GenAI chatbots, fraud detection, credit reporting, report generation software, IT operations, candidate prospecting, etc.) will be a critical element of success for banks.
The Regulatory Landscape
• SS1/23 joins a chorus of regulatory requirements and recommendations around the world on
AI and Model Risk Management. This includes:
• CP6/22: This consultation paper, also from the PRA, was published on June 21, 2022 and serves as an earlier outline of the expectations for identifying and addressing model risk within banks.
• The E.U. AI Act: This legislation passed by the E.U. aims to set a global standard by explicitly banning AI applications deemed to pose an unacceptable risk (such as certain uses of facial recognition) and imposing strict requirements on high-risk applications. This legislation is less directly related to banks and model risk management, but is worth keeping an eye on.
• The AI Risk Management Framework (U.S.): Released by NIST, part of the U.S. Department of Commerce, on January 26, 2023, this framework guides organizations on how to govern, map, measure, and manage AI risk.
• SR 11-7 (U.S.): This supervisory guidance on model risk management was jointly developed by the Federal Reserve and the OCC and has been in effect since 2011.
• The Artificial Intelligence and Data Act (Canada): This proposed legislation, introduced as part of Bill C-27, sets expectations for the use of AI within Canada in order to protect the public interest and requires that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output.
Complying with SS1/23
• SS1/23 provides a comprehensive approach to managing the risk from models and AI, one that can be greatly enhanced by having the right technology, policies, and tools in place to automate the process and reduce reliance on manual effort. Best practices to avoid costly errors and get on the path to compliance include:
• Automated Model Identification: Take a model-agnostic approach to identify and risk-
assess EUCs—including Excel files, Python or R models, and third-party executables—
across the organization. This aligns with SS1/23’s definition of a model as any
quantitative method that transforms input data into outputs using statistical, financial, or
mathematical techniques.
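To make the scanning idea concrete, here is a minimal Python sketch of a model-agnostic discovery pass over a shared filesystem. The extension-to-type mapping, function name, and directory layout are illustrative assumptions, not part of SS1/23 or any particular tool; a real scan would also inspect file contents.

```python
from pathlib import Path

# Extensions that commonly indicate a model or EUC artifact
# (illustrative set; content inspection would refine this).
MODEL_EXTENSIONS = {
    ".xlsx": "Excel EUC",
    ".xlsm": "Excel EUC (macro-enabled)",
    ".py": "Python model",
    ".R": "R model",
    ".exe": "Third-party executable",
}

def scan_for_models(root: str) -> list[dict]:
    """Walk a directory tree and flag candidate model artifacts."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in MODEL_EXTENSIONS:
            findings.append({
                "path": str(path),
                "type": MODEL_EXTENSIONS[path.suffix],
            })
    return findings
```

Each finding would then feed the risk-assessment step, so that a spreadsheet buried in a departmental share is classified the same way as a formally registered Python model.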
• Self-Organizing Model Inventory: Automated model identification ensures regular scans to detect hidden risks and keep the model inventory updated, both firm-wide and department-specific, aligning with SS1/23 Principle 1.3. Model tiering applies a consistent, risk-based rating to each model based on materiality and complexity.
• Powerful, Yet Flexible Risk Assessment: Custom, nuanced risk assessment and testing treatment groups based on model type (deep learning models, classification models, regression models, 3rd-party executables) make it seamless to test different classes of models differently and to generate the corresponding documentation and reports.
• Interdependency Map: A model’s risk depends heavily on the models and data feeding
into it. Hidden risks can arise from inputs to high-impact models. An interdependency
map helps visualize these links and adjust risk scores accordingly. As per SS1/23 Principle
1.2, firms should maintain a firm-wide model inventory to identify direct and indirect
interdependencies and better understand aggregate model risk.
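The interdependency map is, at bottom, a directed graph of which models feed which. As a minimal sketch (model names and the edge representation are hypothetical), the following traversal finds every model that directly or indirectly feeds a high-impact model, which is exactly the set whose risk scores may need uplifting:

```python
from collections import defaultdict

def upstream_models(edges: list[tuple[str, str]], target: str) -> set[str]:
    """Given (source -> consumer) feed relationships, return every model
    whose output directly or indirectly feeds `target`."""
    feeds_into = defaultdict(set)   # consumer -> {direct sources}
    for src, dst in edges:
        feeds_into[dst].add(src)
    seen, stack = set(), [target]
    while stack:
        for src in feeds_into[stack.pop()]:
            if src not in seen:
                seen.add(src)
                stack.append(src)
    return seen
```

For example, if a rates model feeds a pricing model that in turn feeds a capital model, both upstream models surface when the capital model is queried, even though only the pricing model feeds it directly.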
• AI Model Testing & Documentation Generation: Create a comprehensive model
testing suite covering areas like data drift, validity, reliability, fairness, interpretability,
privacy, GenAI usage, security, and code quality. Document results in a standardized,
accessible format. Under SS1/23 Principle 1.3, model tiering should also assess complexity
based on interpretability, explainability, transparency, and risks of bias.
• Comprehensive Documentation Generation & Management: Key qualitative details
(e.g., purpose, owner, impact), recent risk scores, and model testing results are centrally
maintained and regularly updated, supporting SS1/23 Principle 3.5 on thorough model
development documentation.
• 3rd Party Risk Management: Firms must test third-party applications and models to the
same standards as internal ones. This includes identifying AI use, assessing third-party
models and data sources, analyzing security vulnerabilities in libraries, and ensuring
vendor models meet internal model risk management (MRM) expectations, as outlined
in SS1/23 Principle 2.6.
• Proper Controls and Accountability: Firms should control and log who changes models
and when, to enhance security and accountability per SS1/23 Principle 2.3. This also
supports tracking model use frequency and extent, as required by Principle 1.3.
• Approval Workflows: Firms should implement automated approval workflows with alerts
and stage tracking to streamline model approvals, identify bottlenecks, and support
process improvements. As per SS1/23 Principle 2.3, policies must define the model
approval and change process, including clear roles for approval authorities.
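An approval workflow with stage tracking and an audit log can be sketched as a small state machine. The stage names, class, and method names below are illustrative assumptions, not terms from SS1/23 or any specific product:

```python
# Ordered approval stages (illustrative; firms define their own).
STAGES = ["Submitted", "Validation Review", "SMF Sign-off", "Approved"]

class ApprovalWorkflow:
    """Track a model through ordered approval stages, logging who
    advanced it, in the spirit of the Principle 2.3 audit trail."""

    def __init__(self, model_id: str):
        self.model_id = model_id
        self.stage_index = 0
        self.audit_log: list[tuple[str, str]] = []  # (approver, new stage)

    @property
    def stage(self) -> str:
        return STAGES[self.stage_index]

    def advance(self, approver: str) -> str:
        """Move to the next stage; raise if already fully approved."""
        if self.stage_index >= len(STAGES) - 1:
            raise ValueError("Model already fully approved")
        self.stage_index += 1
        self.audit_log.append((approver, self.stage))
        return self.stage
```

Because every transition records an approver and a timestamped log entry can be added trivially, the same structure supports both the bottleneck analysis (which stage do models sit in longest?) and the accountability requirement (who moved this model forward?).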
About Us
• Established in 1988, CIMCON Software, LLC is a pioneer in end-user
computing and model risk management, serving over 800 companies across
industries.
• Recognized by Gartner, Insurance ERM, and others as a top risk management
vendor, CIMCON brings 25+ years of experience and industry best practices to
support AI & GenAI readiness and governance.
• With the largest global installed base, our feature-rich, extensively tested solutions
offer unmatched depth, support, and reliability.
Contact Us
Boston (Corporate Office)
+1 (978) 692-9868
234 Littleton Road
Westford, MA 01886,
USA
New York
+1 (978) 496 7230
394 Broadway
New York, NY 10013