Don’t Over-Complicate Responsible AI: A Pragmatic Starter Guide
- Ayşegül Güzel


Why this post?
Every month seems to bring a new “must-read” AI-governance framework, and many teams freeze—waiting until they’ve read them all. Over the past few weeks I:
Re-read the NIST AI Risk-Management Framework (AI RMF)
Studied BABL AI’s 2023 governance report
and, during an Algorithmic Bias Lab Q&A, asked Dr Shea Brown what he would do on Day 1 as Responsible-AI Officer for a company that provides high-risk AI systems in Europe.
The #1 lesson cutting across every source is simple:
Start small, document what you do, and improve iteratively.
Below is a condensed field-guide you can use to move from “theory-overload” to real progress.
1. NIST AI Risk-Management Framework (AI RMF)
The NIST AI Risk Management Framework offers a voluntary, flexible approach that organizations of any size or sector can use to manage AI risks. It focuses not only on minimizing negative impacts but also on identifying opportunities to maximize positive outcomes and strengthen overall system trustworthiness.
The NIST framework places "People & Planet" at its core, emphasizing that effective AI risk management requires collaboration among diverse stakeholders throughout the AI lifecycle. As illustrated in Figures 2 and 3, various actors—including developers, testers, end-users, and advocacy groups—contribute unique perspectives that help identify risks, establish operational boundaries, and balance societal values. The framework particularly values Testing, Evaluation, Verification, and Validation (TEVV) expertise for ongoing risk assessment, allowing both mid-course corrections and post-deployment management, while diverse teams create opportunities to surface problems and make implicit assumptions explicit.
The AI RMF Core provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems. As illustrated in Figure 5, the Core is composed of four functions: GOVERN, MAP, MEASURE, and MANAGE. Each of these high-level functions is broken down into categories and subcategories. Categories and subcategories are subdivided into specific actions and outcomes. Actions do not constitute a checklist, nor are they necessarily an ordered set of steps.
The Four Core Functions of NIST's AI Risk Management Framework
GOVERN
Creates the foundation for all AI risk management activities by cultivating an organizational culture of responsibility. This function establishes leadership commitment, policies, processes, and accountability structures that align AI development with organizational values and strategic priorities. GOVERN is a cross-cutting function that infuses risk awareness throughout the organization's hierarchy.
MAP
Establishes the critical context for understanding AI risks by identifying potential impacts across the system's lifecycle. This crucial first step involves documenting interdependencies, gathering diverse perspectives from stakeholders, and checking assumptions about how systems will be used. Without this foundational understanding of contextual risks, organizations cannot effectively proceed to measurement or management.
MEASURE
Builds on the risks identified in MAP by applying quantitative and qualitative tools to analyze, assess, benchmark, and monitor AI risks. This function involves rigorous testing before and during deployment, tracking metrics for trustworthiness characteristics, and documenting system functionality. The measurement process provides the evidence base needed for informed management decisions.
MANAGE
Completes the cycle by using insights from both MAP and MEASURE to allocate resources effectively and implement mitigation strategies. This function involves prioritizing risks based on their projected impact, developing response plans, establishing recovery procedures, and creating mechanisms for continuous improvement.
The power of this framework lies in its sequential logic: you must first understand your risks (MAP) before you can effectively measure them (MEASURE), and only with proper measurement can you implement appropriate management strategies (MANAGE)—all within a governance structure (GOVERN) that ensures organizational alignment and accountability.
In essence, GOVERN creates the environment and structure in which effective risk management can occur (the "rules of the game"), while MANAGE represents the actual day-to-day execution of risk mitigation activities (the "playing of the game") for specific AI systems based on what was learned during the MAP and MEASURE functions.
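The four functions and their sequential logic can be sketched as data. A minimal sketch in Python; the example activities below are paraphrases of the framework's descriptions, not NIST's official categories or subcategory IDs:

```python
# Illustrative sketch of the AI RMF Core. Function names are NIST's; the
# example activities are paraphrased, not official subcategory identifiers.
AI_RMF_CORE = {
    "GOVERN": ["define accountability structures", "publish AI policies"],
    "MAP": ["document system context", "identify impacted stakeholders"],
    "MEASURE": ["test before and during deployment", "track trustworthiness metrics"],
    "MANAGE": ["prioritize risks by impact", "plan responses and recovery"],
}

def next_function(current: str) -> str:
    """Return the function that follows `current` in the MAP -> MEASURE -> MANAGE flow.
    GOVERN is cross-cutting, so it is not part of the cycle."""
    order = ["MAP", "MEASURE", "MANAGE"]
    i = order.index(current)
    # After MANAGE, iterate: go back to MAP and re-examine context.
    return order[i + 1] if i + 1 < len(order) else "MAP"

print(next_function("MAP"))  # MEASURE
```

Note that GOVERN sits outside the cycle on purpose: it is the environment the other three run in, which mirrors the "rules of the game" framing above.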

2. BABL AI's Research Paper on The Current State of AI Governance
According to BABL AI's research paper "The Current State of AI Governance," organizations can implement effective AI governance through a staged, practical approach that adapts to their maturity level. For organizations just starting out, BABL recommends a four-step process: (1) assemble a cross-functional committee with diverse expertise and clear authority, (2) develop and publish AI ethics principles aligned with corporate values, (3) create a comprehensive inventory of all algorithmic systems, and (4) deploy initial policies including risk review processes and metrics.
The research highlights essential governance tools including designated roles (Chief AI Ethics Officers, Responsible AI officers), AI ethics and compliance training, frameworks, KPIs, values statements, data sheets/record keeping, procurement guides, and cross-functional teams. Additional controls include transparency documentation, stakeholder engagement policies, best practices documentation, intended use statements, internal/external committees and audits, and AI risk and impact assessments that support intentional system design and development.
For more advanced organizations, BABL recommends engaging external stakeholders, deciding between bottom-up or top-down governance approaches, developing meaningful metrics (like increased internal challenge, repeated surveys of affected users, bias reduction measurements, and streamlined assessment processes), and establishing inventories and repositories. BABL also suggests implementing federated governance—a decentralized approach where governance responsibilities are distributed across different organizational units while maintaining central oversight—particularly valuable for organizations with multiple AI systems serving different purposes, diverse AI sources, or varying risk levels. This practical guidance aligns with the "do not overcomplicate" philosophy, offering organizations a clear starting point while acknowledging that governance approaches must evolve with organizational maturity.

3. ISO 42001: Building on Familiar ISO Frameworks for AI Governance
Harmonized Structure (Foundation in Established ISO Standards):
Unifies common management elements while isolating AI-specific considerations
Creates streamlined systems that integrate with existing ISO standards such as ISO 27001 and ISO 9001

Core Requirements:
Understanding Organizational Context: Balances external requirements (regulatory, market, stakeholder) with internal factors (capabilities, limitations, existing governance)
Leadership: Establishes clear accountability and direction
Risk Management: Implements ongoing socio-technical risk processes
Support Framework: Ensures proper resource allocation and competency development
Performance Evaluation: Requires regular assessment and improvement cycles
Control Implementation Guidance:
Governance: Defines policies, roles, responsibilities, and reporting mechanisms
Resources: Manages data, tools, computing, and human resources
Operations: Oversees lifecycle and usage considerations
Assessment: Coordinates impact analysis, stakeholder communication, and incident reporting
Annex A: Provides a blueprint of reference controls
Implementation Roadmap:
Begin with organizational context and needs assessment
Identify gaps in current state versus requirements
Plan resource allocation (human, technical, financial)
Balance immediate needs with long-term sustainability
Tailor implementation approaches by control type
Align success measures with organizational objectives
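Step two of the roadmap, identifying gaps between the current state and the requirements, reduces to a set difference. A minimal sketch, assuming hypothetical control names rather than the standard's actual Annex A identifiers:

```python
# Hypothetical gap analysis: controls the standard expects versus controls the
# organization has implemented. Control names here are illustrative, not the
# actual Annex A control identifiers.
required = {"AI policy", "role assignment", "impact assessment",
            "data management", "incident reporting"}
implemented = {"AI policy", "data management"}

gaps = sorted(required - implemented)
print(f"{len(gaps)} gaps to close: {gaps}")
```

The point is less the code than the habit: make the required set explicit, make the implemented set explicit, and let the difference drive the resource-allocation step that follows.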
The standard offers organizations a familiar management system structure while addressing the unique challenges of AI governance, making it accessible for those already using other ISO standards.
4. Dr Shea Brown’s Playbook for the EU AI Act
I asked Dr Shea Brown to imagine Day 1 in the role of Responsible-AI Officer for a provider of high-risk AI systems in Europe. Here’s the step-by-step plan he outlined.

1. Understand the Organizational Structure
Identify current governance setup.
Check for ISO 27001 or similar certifications.
If financial services, look for model risk management structures.
Determine where the legal compliance team sits.
Identify if there’s a separate risk team.
Identify if there’s an internal audit function.
Figure out who needs to be engaged.
2. Inventory of AI Systems
Create a large spreadsheet or use whatever system the organization prefers.
List every AI system being provided externally (products).
List every AI system being used internally (operations).
Understand this will be a lot of manual work.
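The "large spreadsheet" can start as simply as a dataclass dumped to CSV. The schema below is a suggestion, not a prescribed format; use whatever columns your organization needs:

```python
from dataclasses import dataclass, asdict
import csv
import io

# Hypothetical inventory schema -- the fields are a suggestion, not a
# prescribed format.
@dataclass
class AISystem:
    name: str
    owner: str
    purpose: str
    external: bool        # provided to customers vs. used internally
    vendor: str = "in-house"

inventory = [
    AISystem("credit-scoring-v2", "risk-team", "credit scoring", external=True),
    AISystem("cv-screener", "hr-team", "resume triage", external=False, vendor="AcmeHR"),
]

# Dump to CSV so the inventory lives in the shared spreadsheet people already use.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(inventory[0])))
writer.writeheader()
for system in inventory:
    writer.writerow(asdict(system))
print(buf.getvalue())
```

Even this toy version forces the two lists the step asks for: systems provided externally and systems used internally.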
3. Create a Risk Categorization Questionnaire
Develop a questionnaire to triage AI systems based on risk (not a full assessment yet, just categorization).
Focus first on systems likely to be high-risk: in financial services, fraud detection, anti-money laundering, and credit scoring; in HR, hiring and promotion algorithms.
Recognize three buckets: clearly high-risk systems (regulated or critical); ambiguous systems (which may need external counsel); and low- or minimal-risk systems.
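The triage itself can be a short function. The domains and questions below are illustrative, not the EU AI Act's Annex III criteria, so treat this as a sketch of the bucketing logic only:

```python
# Hypothetical triage: a few answers from the questionnaire bucket a system
# into high / ambiguous / minimal risk. Domains and questions are illustrative,
# not the EU AI Act's Annex III criteria.
HIGH_RISK_DOMAINS = {"credit scoring", "fraud detection", "anti-money laundering",
                     "hiring", "promotion"}

def triage(purpose: str, affects_individuals: bool, automated_decision: bool) -> str:
    if purpose in HIGH_RISK_DOMAINS:
        return "high"
    if affects_individuals and automated_decision:
        return "ambiguous"   # may need external counsel
    return "minimal"

print(triage("credit scoring", True, True))       # high
print(triage("document search", True, True))      # ambiguous
print(triage("log summarization", False, False))  # minimal
```

A categorization this crude is the point at this stage: it tells you where to spend the expensive, full assessments later.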
4. Explore AI Literacy and Legal Compliance
Identify who has been educated on AI and compliance (find allies).
Ensure compliance with AI literacy requirements (mandatory under Article 4 of the EU AI Act since 2 February 2025).
Check if any systems are prohibited under the Act (e.g., emotion recognition in the workplace, untargeted scraping of facial images from CCTV footage).
5. Training and Education Initiative
If AI literacy is inadequate:
Request budget from leadership for AI training.
Train AI compliance team and other key people (e.g., HR teams using hiring algorithms, developers).
6. Triage the Inventory List
Assume some systems will be classified as high-risk.
For high-risk systems, start referring to Chapter III of the EU AI Act: quality management systems, risk management systems, and transparency documentation.
7. Develop a Risk Assessment Methodology and Framework
Start with risk assessment (before full risk management).
Reach out to security, privacy, or 27001 teams to understand existing risk assessment pipelines.
If existing mechanisms are inadequate, design a new process: Understand company’s preferred communication platforms (e.g., Confluence, Jira, SharePoint, Teams). Create forms, processes, and policies for AI risk assessments aligned with Article 9 of the AI Act.
8. Execute Risk Assessments
Manually reach out to product owners to walk through the risk assessment questionnaire.
Document results systematically.
Maintain a risk register: include risks, affected stakeholders, probabilities, and magnitudes.
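A minimal risk-register entry, assuming a probability-times-magnitude score (a common prioritization heuristic; Article 9 itself does not mandate a specific formula):

```python
from dataclasses import dataclass

# Hypothetical register entry. The score is probability x magnitude, a common
# heuristic; the entries below are invented examples.
@dataclass
class RiskEntry:
    risk: str
    stakeholders: list
    probability: float   # 0..1
    magnitude: int       # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> float:
        return self.probability * self.magnitude

register = [
    RiskEntry("biased credit decisions", ["applicants"], 0.3, 5),
    RiskEntry("model drift after deployment", ["ops team"], 0.6, 2),
]

# Highest score first: this ordering is what drives resource allocation.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:.1f}  {entry.risk}")
```

Even a two-column prioritization like this gives product owners a shared, documented answer to "which risk do we work on first?"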
9. Build Supporting Documentation
While doing the above, simultaneously develop documents for:
Risk Management System (Article 9).
Quality Management System (Article 17), incorporating the transparency (Article 13) and accuracy/robustness (Article 15) requirements.
10. Ongoing Strategy
Understand you are systematically building towards compliance and governance.
Inventory and organizational structure are the foundations.
Risk categorization and assessment are the core processes.
Quality Management Systems will be built on top of those foundations.
Change management processes will be integrated later.
Conclusion: Progress Beats Perfection
Every framework above agrees on two things:
Inventory and context come first. You can’t manage what you haven’t mapped.
Small, documented steps trump grand, unstarted programmes.
So pick one action this week—publish a one-page AI policy, spin up a shared spreadsheet of models, or schedule bias-testing for your flagship algorithm. Celebrate that progress, then tackle the next item.
Responsible AI isn’t a destination; it’s a continuous workout. Start light, add weight gradually, and you’ll build the organisational muscle that regulations—and your customers—now expect.
What’s the very first step you’ll take? Share it in the comments—let’s learn from one another.
About the Author

Ayşegül Güzel
is a trusted voice in AI governance and responsible AI, guiding organizations to develop ethical, regulation-ready AI systems. With a background spanning social entrepreneurship (as an Ashoka Youth Fellow), innovation consulting, and data science, she brings a rare, holistic perspective to building trustworthy AI. As the founder of AI of your choice, Ayşegül works as a consultant, trainer, speaker, and writer—merging technical expertise with deep social insight. A Certified AI Auditor (BABL AI) and TED speaker, she specializes in aligning AI development with human values and systemic impact. Fluent in English, Turkish, and Spanish, Ayşegül collaborates globally to shape AI that serves people, not just progress.








