Regulatory Updates Newsletter: September 2025
- Staff Correspondent
- Sep 30
- 7 min read
Welcome to the September 2025 edition of our regulatory newsletter, highlighting major financial regulation developments worldwide. This month we lead with the EU’s progress on implementing the updated Basel III banking rules, followed by insights into Hong Kong’s AI explainability initiative (Project Noor), Canada’s new model risk management guideline (OSFI E-23), and the UK’s plans to streamline bank reporting templates.
We conclude with a look at a new BIS Financial Stability Institute paper on AI explainability. Each piece below is drawn from official sources. A summary table of additional regulatory news from other jurisdictions follows.
European Commission Advances Basel III Implementation
The European Union is on track with its Basel III implementation, updating capital rules for credit, market, and operational risk to align with global standards. Under the 2024 Banking Package, most Basel III standards (often called “Basel 3.1”) took effect on January 1, 2025. In particular, the EU revised the standardized approach for credit risk to add new real-estate risk categories and tightened corporate SME credit risk weights.
For market risk, the Fundamental Review of the Trading Book (FRTB) requirements were included, but the European Commission has proposed (via a Delegated Act) to postpone the FRTB implementation by one year to January 2027 to mitigate transitional impacts. Operational risk is now calculated under a simpler “standardized approach” (with the internal loss multiplier set to one) instead of the old Basel II framework.
Throughout, the EU has built in extensive transition arrangements: capital output floors and phased implementation stretching into the early 2030s to give banks time to adapt.
Implications:
Banks operating in the EU should review their capital models now. The new Basel 3.1 rules mean higher risk weights under the revised standardized approaches for credit, and changes in market risk calculations, so institutions must validate their capital adequacy under the updated framework. Transitional relief (for example, a gradual phase-in of the output floor) will soften the short-term impact, but firms need to plan for the fully phased-in requirements by around 2032–2033.
In practice this means upgrading reporting systems and credit risk models well ahead of schedule, and tracking the Commission’s final delegated acts and EBA guidelines as these rules are fleshed out.
HKMA & BIS Innovation Hub Launch Project Noor on AI Explainability
Project Noor is a new joint initiative on artificial intelligence explainability spearheaded by the BIS Innovation Hub’s Hong Kong Centre in collaboration with the HKMA and the UK’s FCA. Launched in mid-2025, Noor aims to develop prototype tools and methods that help supervisors and banks interpret complex AI models in finance. It will explore technical XAI (explainable AI) techniques, such as translating model logic into plain language and visualizations, alongside governance frameworks to ensure transparency.
Notably, regulators worldwide are moving to require that high-risk AI systems in finance be explainable and auditable, but no common standard exists yet. Project Noor addresses this by giving supervisors practical tools (e.g. Shapley values, LIME) to see which factors drive an AI’s decision, while also recognizing the limits of these tools. HKMA officials have emphasized that Noor will provide banks and regulators with “model auditing prototypes” to enhance transparency and oversight of AI-driven decisions.
Implications:
Financial institutions using AI (in Hong Kong or elsewhere) should expect regulators to scrutinize explainability. Firms will likely need to demonstrate how their models’ outputs can be traced and justified.
In practical terms, banks should document their AI models’ design and governance, and be prepared to use or provide explainability tools (such as Shapley value analysis) to supervisors. Supervisory expectations will be proportionate: simple models or low-impact use cases may need only minimal explanation, but critical high-impact models will need more extensive audit trails. Ultimately, firms should align their AI model risk practices so that they can show both robust technical explanations and strong oversight controls (data governance, human review, escalation procedures) to regulators.
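To make the Shapley-value idea concrete, here is a minimal, self-contained sketch of exact Shapley attribution against a baseline. The linear “credit score” model, its weights, and the applicant/baseline figures are purely hypothetical, and the brute-force subset enumeration is only practical for a handful of features:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution: features absent from a coalition are
    replaced by their baseline value. Exponential in the number of
    features, so suitable only for small illustrative models."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in subset else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear scorecard (weights and inputs are invented for illustration)
weights = [0.5, -0.3, 0.2]
def score(x):
    return sum(w * v for w, v in zip(weights, x)) + 600

applicant = [10.0, 4.0, 2.0]
portfolio_avg = [6.0, 5.0, 1.0]   # baseline: an "average" applicant
phi = shapley_values(score, applicant, portfolio_avg)
```

For a linear model the attributions reduce to `w_i * (x_i - baseline_i)` and always sum to the gap between the applicant’s score and the baseline score, which makes the output easy to sanity-check; production tooling (for example, the SHAP library) uses far more efficient approximations for complex models.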
OSFI Releases Guideline E-23 on Model Risk Management
Canada’s banking regulator (OSFI) has issued its final Guideline E-23 on Model Risk Management, with a publication date of September 11, 2025, and an effective date of May 1, 2027. Guideline E-23 sets out updated expectations for how banks and insurers must govern their quantitative models (covering anything from credit-loss forecasting to risk analytics, including AI/ML models).
The new Guideline expands scope (notably explicitly covering AI models and actuarial models) and strengthens the governance and documentation requirements. Key features include mandatory model inventories, clear role separation (e.g. independent validation functions), board-level oversight of model risk, and thorough model validation processes. OSFI emphasizes that these are principles-based (not banning any modeling approach) but require each firm to tailor its model risk framework to its size and complexity.
Implications:
Firms subject to OSFI regulation (banks and insurers) should begin multi-year planning to comply with E-23. The May 1, 2027 effective date gives institutions roughly 20 months from publication to complete the necessary changes. Preparations will involve establishing or strengthening an independent model validation unit, upgrading documentation standards (detailed model inventory, assumptions, limitations), and ensuring board and senior management actively review model risk frameworks. OSFI has stated these enhancements “promote responsible innovation and sound decision-making”, but has also stressed that they are integral to financial safety.
In practice, this will require significant resourcing: firms should consider new hires or technology (for example, model inventory software) to manage the expanded volume of model governance tasks. Compliance teams should also coordinate with IT and actuarial areas, since Guideline E-23 explicitly covers any model used in decision-making.
Bank of England Proposes Retirement of Obsolete Banking Reporting Templates
The Bank of England and PRA have opened a public consultation (CP21/25) to retire a set of outdated regulatory reporting templates. This is part of the BoE’s “Future of Banking Data” initiative to streamline supervisory data collection. The proposal (issued September 22, 2025) would eliminate 37 legacy reporting templates, most of which were inherited from older EU regulations.
In particular, many FINREP (financial reporting) forms would go, reflecting that this granular data is either no longer needed or is collected through other reports. The BoE notes that these templates cover information that can now be obtained more simply, so removing them will reduce costs. The initial package targets templates related to financial statements and exposures; banks would continue to submit crucial risk data, but via the remaining streamlined filings. The consultation runs through October and aims to implement changes by January 1, 2026, yielding an estimated £26 million in annual burden reduction.
Implications:
Banking firms should review which reporting processes will change. Systems teams need to map the retiring templates and ensure data is either no longer sent or is fed into consolidated reports. Even if templates are deleted, firms must still meet all core reporting obligations under COREP/FINREP rules; they simply won’t submit those specific forms. Transition planning is key: banks will have to update their reporting systems and data warehouses before early 2026.
In practice, compliance and data teams should take this opportunity to audit their regulatory reporting flows and eliminate redundant fields, but also confirm that no critical credit or risk metrics are inadvertently lost in the streamlining. The BoE’s broader guidance suggests it will conduct future reviews, so firms should stay alert for more templates being cut in 2026 and 2027.
BIS Financial Stability Institute Publishes Paper on Managing Explanations in AI
The Bank for International Settlements’ Financial Stability Institute (FSI) has published a new paper on AI explainability challenges for supervisors (FSI Occasional Paper No. 24, September 2025). The paper reviews the “explainability gap” in advanced AI (like deep learning) and assesses how regulators might address it. It notes that while many authorities currently require only model governance rather than explicit explainability, emerging rules (for example under upcoming EU AI legislation) increasingly expect high-risk AI to be explainable and auditable. The FSI paper highlights that purely technical XAI methods (e.g. saliency maps, feature-importance scores) have limits in accuracy and stability, so supervisors should adopt a hybrid approach. This means combining technical tools (such as Shapley value attribution or LIME visualizations to indicate input influences) with strong governance measures (like enhanced data controls, “guardrails,” and human oversight when models operate outside known conditions).
The paper suggests proportionality: low-risk models may need only basic documentation, whereas “mission-critical” AI (e.g. in credit underwriting or capital calculations) should meet higher transparency standards. It also advises supervisors to tier requirements: for example, high-impact AI systems could be restricted to more interpretable model classes unless adequate mitigations are in place.
Implications:
Regulators are likely to start imposing tiered explainability requirements. Financial firms should align their model risk and consumer protection practices accordingly.
In concrete terms, institutions using AI for significant decisions should prepare to run explainability diagnostics (e.g. calculating input contributions) and document the results. They should also bolster governance: set up internal AI risk committees, track AI model decisions over time, and train staff on verifying model outputs.
Ultimately, the goal will be to show regulators not just a technical explanation, but also a robust process that ensures any opaque AI is subject to human review or fallback options. The hybrid framework means firms must balance model innovation with transparency, for example by building “fail-safe” triggers that hand off decisions when the model is unconfident. Firms should therefore review their AI deployment policies now, update their model inventories to flag “high-impact” systems, and document both their technical explainability techniques and their governance controls.
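A “fail-safe” trigger of the kind described above can be as simple as a confidence threshold that tightens with a model’s impact tier. The tier names and threshold values in this sketch are illustrative assumptions, not supervisory prescriptions:

```python
def route_decision(confidence: float, impact_tier: str,
                   thresholds: dict[str, float]) -> str:
    """Hand off low-confidence model outputs for human review.

    Higher-impact tiers get stricter thresholds; an unknown tier
    defaults to always escalating (fail-closed)."""
    threshold = thresholds.get(impact_tier, 1.1)  # > 1.0 -> always escalate
    if confidence >= threshold:
        return "auto_approve"
    return "human_review"

# Illustrative policy: mission-critical models need near-certain outputs
TIER_THRESHOLDS = {"low": 0.70, "medium": 0.85, "high": 0.95}
```

The fail-closed default for unrecognized tiers reflects the hybrid approach the FSI paper describes: technical signals (here, model confidence) feed into a governance rule that guarantees a human is in the loop whenever the model operates outside known conditions.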
Summary of Other Notable Updates
| Jurisdiction | Regulator | Update |
| --- | --- | --- |
| UK | FCA | Published CP25/25 (17 Sep 2025) on applying existing FCA Handbook rules to cryptoasset firms; the consultation outlines how governance, SM&CR, the Consumer Duty, and other requirements will extend to crypto |
| Eurozone | ECB/SSM | Amended the ECB FINREP reporting Regulation (9 Sep 2025) to add nine new credit risk data points, strengthening supervisory review of smaller banks’ credit risk (effective Dec 2025) |
| Singapore | MAS | Issued Prohibition Orders against three former bank employees for unauthorized access to customer information under the new FSMA 2022 regime |
| European Union | EBA | Published advice (23 Sep 2025) on the EU Covered Bond framework review, recommending enhanced harmonisation of national rules, greater transparency for investors, and possibly a third-country regime for non-EU covered bonds |
| United States of America | SEC | Voted on 17 September 2025 to approve generic listing standards for spot-commodity and digital-asset trust shares |
| Australia | APRA | Published its 2025-26 Corporate Plan, outlining four strategic pillars including a heightened focus on cyber resilience under CPS 230 |
Stay informed with our regulatory updates and join us next month for the latest developments in risk management and compliance!
For any feedback or requests for coverage in future issues (e.g. additional countries or topics), please contact us at info@riskinfo.ai. We hope you found this newsletter insightful.
Best regards,
The RiskInfo.ai Team