The Transformative Power of AI in Market Risk and FRTB
- Ravi Bhushan

Global banks face immense pressure to align daily risk calculations with constantly evolving FRTB mandates and to transform market risk practices. The Fundamental Review of the Trading Book (FRTB) marks a significant shift in how banks measure and manage market risk, allocate regulatory capital, and demonstrate compliance at the trading desk level. Designed to be more risk-sensitive, transparent, and consistent, FRTB urges institutions to overhaul their risk architectures, upgrade methodologies, modernise data pipelines, and develop stronger models that better reflect actual market exposures.
Implementing FRTB, through both the mandatory Standardised Approach (SA) and the optional Internal Models Approach (IMA), presents significant challenges. The IMA requires rigorous model eligibility testing, extensive risk factor coverage, and robust P&L attribution, all of which test existing risk platforms and governance frameworks. A particularly complex issue is the classification and capital treatment of Non-Modellable Risk Factors (NMRFs), which involves intricate data validation and proxy modelling. These challenges affect not only capital calculations but also the accuracy and credibility of risk measures used across the organisation. The complexity of these requirements often causes delays in FRTB implementation timelines across multiple jurisdictions as regulators and institutions grapple with the scale of change needed. In this context, the role of AI/ML in streamlining these processes and enhancing efficiency becomes crucial.
Many institutions still rely on sensitivity-based ("Greek") historical VaR models and legacy systems. However, there is a growing shift toward Monte Carlo simulation, full revaluation (FR), and Risk Not in VaR (RNIV) frameworks, particularly for desks dealing with exotic or illiquid assets. Central to the FRTB framework is the Risk Factor Eligibility Test (RFET). Risk factors that fail its modelability criteria become NMRFs, which incur significant capital charges through stressed Expected Shortfall (ES) add-ons. Moreover, the industry's transition from LIBOR to Alternative Reference Rates (ARRs) has introduced new data complexities and tested the robustness of legacy risk infrastructure.
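As context for the ES-based charges mentioned above, here is a minimal sketch of a historical Expected Shortfall calculation. The 97.5% confidence level follows the FRTB standard; the P&L series is simulated for illustration.

```python
import numpy as np

def expected_shortfall(pnl, alpha=0.975):
    """Historical Expected Shortfall: the mean loss beyond the alpha-level VaR.

    pnl: array of daily P&L values (positive = gain).
    Returns ES as a positive loss number.
    """
    losses = -np.asarray(pnl, dtype=float)   # convert P&L to losses
    var = np.quantile(losses, alpha)         # VaR at the alpha level
    tail = losses[losses >= var]             # losses at or beyond VaR
    return tail.mean()

# Example: 1,000 simulated daily P&L observations
rng = np.random.default_rng(42)
pnl = rng.normal(0.0, 1.0e6, size=1000)
es = expected_shortfall(pnl, alpha=0.975)
```

In production, the same tail average would be computed over stressed-period scenario P&L and aggregated across liquidity horizons per the standard.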
Amid these challenges, Artificial Intelligence (AI) and Machine Learning (ML) are emerging as powerful tools to modernise market risk management. From automating data ingestion, management and anomaly detection to improving P&L attribution and proxy modelling, integrating AI into risk workflows not only enhances data traceability, optimises NMRF treatment, refines ES/DRC, and improves PLAT test results, but also increases capital efficiency and reduces operational costs. These benefits highlight the value of AI/ML in FRTB implementation and promote its broader adoption.
This white paper explores how AI and ML are transforming the implementation of the FRTB framework. Key takeaways include understanding the complexity and challenges of successful FRTB adoption, the impact of hedging strategies on desk-level tests, and how AI can be leveraged to reshape the market risk process.
FRTB SA – Standardised Approach
The FRTB Standardised Approach (SA) allocates capital based on detailed risk exposures, using regulator-prescribed risk weights and correlations calibrated to a common holding period and confidence level. The Sensitivities-Based Method (SBM) within SA identifies risk drivers such as interest rate curves, credit spreads, and equity volatilities. It captures delta, vega, and curvature risks across risk classes, with a prescribed aggregation scheme promoting consistent capital calculations among institutions.
Challenges in Standardised Approach
The Standardised Approach provides a detailed framework for market risk capital reporting. Risk factor mapping errors can cause overstated capital or regulatory issues, while the sheer volume of sensitivity data creates integration problems, reconciliation differences, and heavier, slower systems. Accurate and consistent calculation of delta, vega, and curvature sensitivities across systems is therefore crucial. The SA handles non-standard or hard-to-model instruments separately through the Residual Risk Add-On (RRAO) and applies a look-through approach for basket funds and ETFs, with the goal of calculating capital charges that reflect the actual risk exposures of the underlying assets. Key challenges include:
Data infrastructure, ingestion, and maintenance of data lineage
Data quality and risk factor mapping at a detailed level (e.g., bucketing)
Infrastructure to process reporting, compute, and aggregate risk sensitivities across desks
Alignment and integration of SA results with internal risk views for management reporting
Risk factor, class, and bucket mapping, along with integration of results at each level
How AI/ML can help – FRTB SA
Sensitivities Monitoring - Risk Dashboard
Track real-time or EOD sensitivities across risk classes (Rates, Credit, FX, Equity, Commodity).
Anomaly Detection: Identify unusual spikes or drops in sensitivities using models like Isolation Forest or One-Class SVM.
Interactive heatmap by risk class and bucket. Classification models map instruments to the correct risk class, bucket, and sector based on ISIN, issuer name, or features (region, rating, industry).
Train on labelled mapping data from existing instruments.
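A minimal sketch of the anomaly-detection idea above, using scikit-learn's Isolation Forest on a simulated history of EOD sensitivities. The data and the contamination rate are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical EOD delta sensitivities for one (desk, risk-class) bucket:
# ~1 year of history, then one unusual end-of-day reading.
rng = np.random.default_rng(0)
history = rng.normal(loc=100.0, scale=5.0, size=(250, 1))
today = np.array([[180.0]])  # a spike well outside the historical range

# contamination sets the expected anomaly rate used to place the threshold
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(history)

# predict: +1 = normal, -1 = anomaly
flag = model.predict(today)[0]
```

In a dashboard, the same fit would run per risk class and bucket, with flagged readings routed to the alerts view.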
Capital Charge Breakdown - Risk Dashboard
Capital Optimisation: ML suggests exposure reallocation (e.g., change weights or hedges) to reduce capital.
What if Scenarios: Use ML models to simulate changes in risk factors and see the capital impact instantly.
Sunburst or treemap view of capital by component
Trend & Attribution Analysis - Risk Dashboard
Attribution Models: Use regression or tree-based models to decompose capital changes into risk factor-level drivers.
Line graphs with overlays of attributions
Alerts & Early Warnings - Risk Dashboard
Proactively warn about modellability issues, capital spikes, or failed limits.
Predictive alert engine trained on historical limit breaches or risk factor failures.
Alert dashboard with severity, reason, and recommended action
Email/Slack integration for push notifications
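The predictive alert engine described above could be sketched as follows. The features (limit utilisation, sensitivity trend) and the synthetic breach history are hypothetical choices for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic training history: features = [limit utilisation, 5d sensitivity trend],
# label = 1 if a limit breach followed the next day (a made-up deterministic rule).
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 2))
y = ((X[:, 0] > 0.8) & (X[:, 1] > 0.5)).astype(int)  # breaches when near limit and trending up

clf = GradientBoostingClassifier(random_state=1).fit(X, y)

# Score today's desks: probability of a breach tomorrow drives alert severity.
today = np.array([[0.95, 0.9],   # near limit and rising -> should score high
                  [0.10, 0.2]])  # comfortable headroom -> should score low
probs = clf.predict_proba(today)[:, 1]
```

A real engine would be trained on actual breach logs and many more features (NMRF counts, data staleness, market volatility), with the scores feeding the severity column of the alert dashboard.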
Example Use case - A large global bank implemented a deep learning-based monitoring system to flag early signs of spread widening in high-yield credit portfolios, allowing dynamic hedging before end-of-day risk metrics were breached.
| Use Case | ML Solution | Benefit |
| --- | --- | --- |
| Instrument-to-Bucket Mapping | Classification models (e.g., Random Forest, XGBoost) trained on labelled trade data | Reduces manual errors and ensures consistent bucketing |
| Curve Point Grouping | Clustering algorithms (e.g., K-means, DBSCAN) to group correlated tenors | Better tenor allocation in IR risk buckets |
| Mapping New Instruments | NLP + ML to classify based on issuer/sector/region from trade descriptions or ISIN metadata | Enables dynamic, scalable mapping for new products |
| Sensitivity Calculation | Surrogate ML pricing models (e.g., XGBoost, neural nets) | Rapid, approximate Greeks where analytical models are too slow |
| Pipeline Automation | ML-integrated ETL workflows (e.g., using Airflow + model scoring) | Automatic updates of capital metrics at EOD or intraday |
| Capital Attribution | SHAP, TreeExplainer | Attribute capital to instruments or desks for internal optimisation |
| Sensitivity Audit Tools | Outlier detection on submitted sensitivities (e.g., Isolation Forest) | Ensures regulatory consistency and highlights deviation from peer norms |
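As one concrete instance of the curve-point grouping use case, here is a sketch using K-means on log-tenors. The tenor grid and the cluster count are illustrative; a production version would cluster on observed tenor-return correlations rather than tenor alone.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical IR curve tenor grid, in years.
tenors = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 15, 20, 30]).reshape(-1, 1)

# Cluster on log-tenor so the short end is not crowded into a single group.
features = np.log(tenors)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
labels = km.labels_  # cluster id per tenor, usable as a bucket assignment
```

The resulting labels give a first-pass grouping of correlated curve points that can be reviewed before feeding the SBM bucket mapping.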
P&L Attribution Model Performance - Key for IMA Desk Eligibility
P&L attribution tests (PLAT) are vital for confirming the connection between risk factors and profits or losses. By analysing P&L changes by risk source, these tests help validate the accuracy of internal market risk models. Under the FRTB framework, the Internal Models Approach (IMA) emphasises precise revaluation of captured risks, especially for non-linear instruments, and requires at least 75% risk factor coverage during stress periods at trading desks. Calculating Risk-Theoretical P&L (RTPL) is a crucial step: it involves revaluing the portfolio through the risk model's own risk factors to assess the market's impact on P&L, and aligning the result with front-office valuations. Incorporating RTPL involves managing complex modelling and data challenges. Two statistical performance tests are used to ensure proper alignment: the Spearman rank correlation test and the Kolmogorov-Smirnov (K-S) test. These verify the correlation and distributional consistency between Hypothetical P&L (HPL) and RTPL, confirming agreement between front-office and risk measures. However, PLAT still faces challenges with the Spearman correlation test for certain desks when analysing hedged P&L time series.
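The two PLAT statistics can be computed directly from the HPL and RTPL series. A minimal sketch on simulated data follows; the series are synthetic, and the green-zone thresholds (roughly Spearman above 0.80 and K-S below 0.09 under the Basel text) should be checked against the current standard.

```python
import numpy as np
from scipy import stats

# Hypothetical daily Hypothetical P&L (HPL) and Risk-Theoretical P&L (RTPL)
# for one desk over ~250 business days; RTPL tracks HPL with model noise.
rng = np.random.default_rng(7)
hpl = rng.normal(0.0, 1.0e5, size=250)
rtpl = hpl + rng.normal(0.0, 1.0e4, size=250)  # small unexplained component

# PLAT metrics: Spearman rank correlation between the two series, and the
# Kolmogorov-Smirnov distance between their empirical distributions.
spearman, _ = stats.spearmanr(hpl, rtpl)
ks_stat, _ = stats.ks_2samp(hpl, rtpl)
```

For a well-aligned desk like this simulated one, the Spearman correlation sits close to 1 and the K-S distance is small; a heavily hedged desk with near-zero net P&L can see the rank correlation destabilise even when the model is sound, which is the failure mode discussed below.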
Challenges in P&L attribution test
Risk factor mapping, trade-booking errors, and data gaps often lead to PLAT failures.
Over-hedging or complex hedging strategies: Hedged portfolios often result in low net P&L, which can cause correlation metrics to become unstable or misleading.
Model granularity mismatch: RTPL might not fully capture all risk factors influencing HPL (e.g., idiosyncratic risks, basis risks).
Sensitivity approximation errors: In non-linear portfolios (e.g., options), delta/gamma approximations may differ from full revaluation of HPL.
Data or infrastructure issues: Gaps in market data history, inconsistent trade booking, or misalignment between front office and risk systems.
Portfolio effects: Desk-level aggregation can hide relationships that are visible at more granular levels.
Mitigation Strategies
Re-express PLAT on sub-portfolios: Decompose the desk into more homogeneous books (e.g., long only or unhedged segments). Apply PLAT separately and aggregate pass/fail metrics cautiously.
Improve RTPL accuracy: Incorporate higher-order Greeks for non-linear instruments. Improve risk factor mappings and recalibrate sensitivity models more frequently.
Adjust governance and control frameworks: Enhance data quality controls, such as reconciliation between trade and risk systems. Review model assumptions and ensure alignment with actual portfolio strategies.
Engage with regulators: If Spearman tests keep failing, desks may need to switch to the Standardised Approach (SA). If performance worsens because of hedging effects not reflected in RTPL, consider asking for exceptions or providing extra documentation.
How AI/ML can help – PLAT
Machine learning models such as neural networks and gradient boosting can capture the nonlinear relationships that drive PLAT misalignment. AI detects patterns in hedging strategies, aiding in predicting residual risks often overlooked by traditional models. It supports proxy modelling for risks that are illiquid or difficult to model and helps forecast capital impacts during market simulations. NLP tools enable comparison of front-office narratives with risk model outputs. Enhancing RTPL estimation is crucial for exotic derivatives and illiquid instruments. Additionally, AI can simulate historical stress scenarios through interpolation or extrapolation when data is limited.
Automated Anomaly Detection in time series - Catch unexplained deviations in P&L or risk factor behaviour early.
Implementing automated anomaly detection in risk factor coverage and P&L time series presents a valuable opportunity.
Validate 75% risk factor coverage compliance under historically plausible but unobserved stress conditions.
LLMs can extract and reconcile trade data across front-office, risk, and finance systems. They are also highly useful for generating key documents such as regulatory papers, model validation reports, and compliance narratives.
Example use case - A UK Tier 1 bank uses GPT-style LLMs to generate PRA audit-ready narratives for FRTB model change applications, reducing report turnaround time by 60%.
Non-modellable Risk factors (NMRF) – Market Data challenge
The Risk Factor Eligibility Test (RFET) determines which risk factors may be modelled under the IMA, in line with the approach's data principles and capital requirements. Risk factors that fail the test become NMRFs and attract higher capital charges through the stressed expected shortfall (SES) add-on to the IMCC, with limited recognition of diversification and hedging. Banks need to review their existing RNIV frameworks and enhance risk reporting accordingly. Implementing FRTB is challenging because identifying and validating market data for NMRFs is complex, and NMRF capital charges are driven by worst-case stress scenarios. Proxying involves replacing or estimating NMRFs with similar, more liquid risk factors when direct measurement is not possible. Applying the IMA data principles, including governance over third-party data, can strengthen risk management, and maintaining FRTB data standards is essential for NMRF treatment.
Challenges in NMRF/RFET
Strict modelability criteria: risk factors must have at least 24 real price observations over the previous 12 months, with no 90-day period containing fewer than four observations.
Capital for NMRFs is based on stressed Expected Shortfall under worst-case loss scenarios.
Difficulty in identifying proxies with similar risk behavior to reduce the NMRF impact.
Existing infrastructure may be insufficient to handle the granular requirements of RFET and ES calculations.
A heavy reliance on third-party data vendors for market data may introduce challenges related to data licensing and validation.
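The modelability criterion above can be checked mechanically. The sketch below implements only the 24-observation and 90-day-gap conditions on a set of observation dates; the final rules also allow an alternative 100-observation route and other refinements, so treat this as a simplified illustration.

```python
from datetime import date, timedelta

def passes_rfet(obs_dates, as_of):
    """Simplified RFET check: >= 24 real-price observation dates in the past
    12 months, and no 90-day window with fewer than 4 observations."""
    start = as_of - timedelta(days=365)
    dates = sorted({d for d in obs_dates if start <= d <= as_of})
    if len(dates) < 24:
        return False
    # Slide a 90-day window across the year; each must hold >= 4 observations.
    w = start
    while w + timedelta(days=90) <= as_of:
        in_window = sum(1 for d in dates if w <= d < w + timedelta(days=90))
        if in_window < 4:
            return False
        w += timedelta(days=1)
    return True

as_of = date(2024, 12, 31)
# Twice-monthly observations: 24 dates, evenly spread -> should pass.
monthly = [date(2024, m, 15) for m in range(1, 13)] + \
          [date(2024, m, 1) for m in range(1, 13)]
# Quarterly observations: too few -> should fail.
sparse = [date(2024, m, 15) for m in (1, 4, 7, 10)]
# Thirty observations all in January: enough in total, but gapped -> should fail.
clustered = [date(2024, 1, d) for d in range(1, 31)]
```

This is exactly the kind of check that gets run nightly across tens of thousands of risk factors, which is why the infrastructure point above matters.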
How AI/ML can help – NMRF
Use factor models (e.g., Principal Component Analysis, Kalman filters) to decompose NMRFs into systematic and residual risk. Retain only the residual component as the true NMRF, which helps isolate what needs capitalisation. Introduce volatility scaling or stress-period adjustment factors to align the proxy's risk profile with the actual observed behavior of the NMRF. Employ conditional correlation analysis to ensure proxies remain valid under changing market regimes.
Modern proxying frameworks, enhanced by statistical and AI-based techniques, can:
Decompose NMRFs into:
Systematic components (correlated with modellable factors)
Residual components (idiosyncratic risk, still treated as NMRF)
Map illiquid instruments (e.g., exotic options, local-market bonds) to more liquid benchmarks.
Enable portfolio-level netting effects, reducing the standalone impact of hard-to-model factors.
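The systematic/residual decomposition above can be sketched as a least-squares projection of NMRF returns onto modellable factor returns. All series here are simulated; a production version would use PCA or regularised regression over many candidate factors.

```python
import numpy as np

# Hypothetical daily returns: two modellable benchmark factors and one NMRF
# that loads on them plus an idiosyncratic residual.
rng = np.random.default_rng(11)
factors = rng.normal(0, 0.01, size=(500, 2))          # modellable factors
beta_true = np.array([0.8, -0.3])
nmrf = factors @ beta_true + rng.normal(0, 0.002, size=500)

# Least-squares projection: the systematic part is spanned by modellable
# factors; the residual is what still requires NMRF (SES) capital.
beta, *_ = np.linalg.lstsq(factors, nmrf, rcond=None)
systematic = factors @ beta
residual = nmrf - systematic
```

The capital benefit comes from the residual having far lower volatility than the raw NMRF, so only the genuinely idiosyncratic component is capitalised under the stress-scenario treatment.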
Proxy Mapping Algorithm - Use reinforcement learning or evolutionary algorithms to adapt proxy selection as market conditions change dynamically.
Supervised ML models (e.g., Random Forest, XGBoost) can predict NMRF behavior based on modelable proxies, optimising capital by minimising proxy error.
Model selection frameworks help choose proxies that offer optimal trade-offs between correlation and volatility.
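A toy version of such a correlation/volatility trade-off, scoring candidate proxies by correlation minus a volatility-mismatch penalty; the scoring formula itself is a hypothetical choice, not a standard metric.

```python
import numpy as np

def score_proxy(nmrf, proxy):
    """Score a candidate proxy: reward correlation, penalise volatility
    mismatch (a hypothetical trade-off metric for illustration)."""
    corr = np.corrcoef(nmrf, proxy)[0, 1]
    vol_penalty = abs(np.log(proxy.std() / nmrf.std()))
    return corr - vol_penalty

rng = np.random.default_rng(5)
nmrf = rng.normal(0, 1.0, size=500)
good = nmrf + rng.normal(0, 0.3, size=500)          # correlated, similar vol
noisy = 3.0 * nmrf + rng.normal(0, 3.0, size=500)   # correlated but vol-mismatched
unrelated = rng.normal(0, 1.0, size=500)            # no relationship at all

scores = {name: score_proxy(nmrf, p)
          for name, p in [("good", good), ("noisy", noisy), ("unrelated", unrelated)]}
best = max(scores, key=scores.get)
```

A model-selection framework would apply a score like this across a large candidate universe, then validate the winner's stability across market regimes before approving it as a proxy.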
Synthetic Data Generation: Generative models (e.g., GANs, Bayesian networks) can simulate plausible missing prices or reconstruct gaps between observations.
Scenario Generation:
ML models can create realistic tail scenarios for NMRFs, better aligning with ES-based IMCC calculations.
Reinforcement learning can adaptively simulate portfolio behavior under varying risk factors and stress scenarios.
Data Quality Scoring:
ML models can assess and classify third-party data quality (e.g., bid/ask spread, frequency, timestamp granularity).
Powering a Red-Amber-Green (RAG) framework to track and prioritise NMRFs by remediation need.
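A minimal sketch of such a RAG scorer; the thresholds on bid/ask spread and observation frequency are illustrative assumptions, not regulatory values.

```python
def rag_score(avg_bid_ask_bps, obs_per_month):
    """Hypothetical RAG classification of a vendor data series for an NMRF.

    avg_bid_ask_bps: average quoted bid/ask spread in basis points.
    obs_per_month:   average number of real price observations per month.
    """
    if avg_bid_ask_bps <= 20 and obs_per_month >= 15:
        return "GREEN"   # liquid enough, dense observations
    if avg_bid_ask_bps <= 60 and obs_per_month >= 4:
        return "AMBER"   # usable, but needs monitoring or a proxy backup
    return "RED"         # remediation needed: new data source or proxy
```

An ML version would replace the hand-set thresholds with a classifier trained on historical remediation outcomes, but the RAG output feeding the prioritisation dashboard would look the same.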
Implement graph-based AI tools (e.g., knowledge graphs) to map data flow and link trade source → risk factor → model input → capital output.
Model Performance and Regulatory Reporting
Model Performance Monitoring: ML-powered dashboards can track model drift, stability, and backtesting outcomes in near real-time.
Stress Testing Automation: AI can automate scenario design and evaluate the model’s behaviour under synthetic stress conditions.
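One common drift metric for such monitoring is the Population Stability Index (PSI). The sketch below compares a baseline score distribution with recent samples; the data is simulated and the thresholds in the comment are the usual rule of thumb, not a regulatory standard.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a recent sample.

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 severe.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf           # catch out-of-range values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)              # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(9)
baseline = rng.normal(0, 1, size=2000)   # model inputs at validation time
stable = rng.normal(0, 1, size=2000)     # recent sample, same regime
shifted = rng.normal(1.0, 1, size=2000)  # recent sample after a regime shift
```

Tracking PSI per feature on a dashboard gives an early, explainable signal that a surrogate pricing or proxy model needs recalibration before its backtesting deteriorates.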
Benefits and Challenges of AI/ML in Risk Management
Enhanced Predictive Power
Machine learning models can capture non-linear relationships and complex interactions that traditional models (e.g., VaR, stress testing) miss.
Improved forecasts of market volatility, credit defaults, and liquidity events
Real-Time Risk Monitoring
AI enables dynamic risk dashboards, monitoring exposures in near real-time.
Faster detection of unusual trading patterns, emerging market shocks, or liquidity squeezes.
Challenges
Explainability & Transparency
Regulators demand explainability under SR 11-7, ECB TRIM, PRA SS1/23, and FRTB frameworks.
Model Risk & Validation
Traditional model validation frameworks are not always suited for AI.
Overfitting and concept drift (market regime shifts) can quickly invalidate models.
Summary
Artificial Intelligence (AI) and Machine Learning (ML) are transforming how market risk is measured, monitored, and managed, especially within the regulatory framework of the Fundamental Review of the Trading Book (FRTB). By enhancing data analysis, boosting model performance, and automating regulatory reporting, AI enables more efficient compliance, capital optimisation, and risk transparency. Successful implementation requires not only advanced modeling techniques but also robust data governance and alignment with regulatory standards and risk processes. This involves establishing new infrastructure and processes for regulatory reporting. Integrating AI into the capital calculation, risk attribution, and data quality processes of both SA and IMA frameworks allows banks to streamline compliance, reduce capital requirements, and develop resilient, future-ready market risk infrastructures. As banks adopt FRTB’s Internal Models Approach (IMA), AI becomes a strategic enabler—facilitating risk factor modellability, improving stress testing, supporting explainability, and ensuring end-to-end data lineage.
Far from being experimental, AI is now vital to building a resilient, agile, and forward-looking market risk function that meets evolving regulatory and business demands. By embedding AI into the FRTB framework, institutions can accelerate compliance efforts while enhancing capital efficiency, data traceability, and model robustness, making it an increasingly necessary capability for future-proof market risk management.
References
1. BCBS, Minimum capital requirements for market risk (FRTB final standard): https://www.bis.org/bcbs/publ/d457.htm
2. EBA, Regulatory Technical Standards on capitalisation of non-modellable risk factors under the FRTB: https://www.eba.europa.eu/regulatory-technical-standards-capitalisation-non-modellable-risk-factors-under-frtb
3. PRA CP16/22 – Implementation of the Basel 3.1 standards: Market risk: https://www.bankofengland.co.uk/prudential-regulation/publication/2022/november/implementation-of-the-basel-3-1-standards/market-risk
4. PRA announcement, January 2025 – delay to the implementation of Basel 3.1: https://www.bankofengland.co.uk/news/2025/january/the-pra-announces-a-delay-to-the-implementation-of-basel-3-1
5. EBA 2025 consultation paper on amending the draft ITS on benchmarking of internal models: https://www.eba.europa.eu/sites/default/files/2025-02/4399a47b-858a-4abe-8e80-674e44c6f895/Consultation%20paper%20on%20amending%20draft%20ITS%20on%20benchmarking%20of%20internal%20models.pdf

Ravi Bhushan
is a Director at Solytics Partners. Working at Solytics, Ravi advises various large global and regional banks across multiple jurisdictions on market, modelling, and model risks. Solytics Partners is a global services and solutions provider in Risk, Compliance, Analytics and Technology. For more information about their work, you can visit https://www.solytics-partners.com/.