

Curriculum: A Bird’s-Eye View
AI/ML Model Lifecycle: development, testing, deployment, monitoring
Governance frameworks: second-line oversight, policy design, risk controls
XAI & Responsible AI: explainability metrics, fairness, bias, drift, stability
Validation of ML systems: conceptual soundness, outcomes analysis, benchmarking
Regulations: EU AI Act, MAS, OCC/Fed SR 11-7, PRA/ECB expectations, NIST
Documentation: model governance documents, inventories, decision logs, audit trails
Monitoring controls: performance drift detection, data drift, robustness checks

Who Should Attend?
01 Model validation and model risk teams expanding into AI
02 Data scientists who need regulatory and governance context
03 Technology leaders overseeing AI deployment within regulated financial environments
04 Risk managers overseeing algorithmic models and automated systems

What Distinguishes the Riskinfo.AI Certificate in AI Model Risk & Governance?
Designed by practitioners from leading banks, fintechs, and Big 4 consulting firms
Translates complex regulatory requirements into actionable governance processes
Includes downloadable templates for inventories, validation checklists, and governance policies
Focuses on skills required in high-demand AI assurance and Responsible AI roles

Career Opportunities
AI/ML Model Risk Analyst
Model Validator (AI/ML focus)
Responsible AI / AI Governance Specialist
AI Compliance & Policy Associate
Risk Technology & Controls roles in BFSI

Eligibility
Professionals working in risk, analytics, compliance, audit, or governance roles.
Data scientists seeking regulatory, governance, and validation context for models.
Technology leaders overseeing AI deployment within regulated financial environments.
Graduates aspiring to careers in model risk or AI governance.
