Algorithmic Risk and Impact Assessment: A Crucial Step Toward an Ethical Approach to AI
- Ayşegül Güzel

- Apr 14
Updated: May 6

Today I want to talk about one of the basics in the AI Governance world: Algorithmic Risk Assessments.
With the EU AI Act starting to apply in phases as of the 2nd of February, it is important to understand what algorithmic risk assessment is: many companies deploying or using AI systems will be obliged to have algorithmic risk assessments in place.
However, regulatory obligation is not the only reason. Risk assessment is essential as a starting point for AI governance: it guides risk mitigation (which risks to prioritize and which controls to put in place), supports both internal and external accountability, and informs further technical testing.
This last point may be the most crucial, because right now the field often works the other way around. Many companies start by conducting technical tests and measuring certain metrics without fully understanding why they are using those metrics or why a technical test is necessary. With an algorithmic risk assessment in place, everything becomes clearer, including decisions on which metrics to use and which algorithmic tests to conduct.
You would need an algorithmic risk assessment if you are an algorithm developer looking to identify and mitigate risks, an algorithm buyer aiming to reduce risks, a regulator assessing whether an algorithm meets legal standards, or an external stakeholder making informed choices about using, investing in, or engaging with certain companies or algorithmic systems.
What is algorithmic risk and impact assessment?

To answer the question, "What is algorithmic risk assessment?" let's start with the basics.
What does "algorithmic" mean?
An algorithm is a step-by-step procedure or set of rules designed to solve a specific problem or perform a task. It is a sequence of instructions that takes an input, processes it, and produces an output.
Let me give you an example of an algorithm in use. Recently, I have been reviewing an algorithm used by Sweden's Social Insurance Agency. In Sweden, there is a welfare system for parents: when their children are sick, they can apply for financial support from the government to compensate for missed work. The Social Insurance Agency uses an algorithm to identify individuals with a high probability of committing fraud. This algorithm follows a sequence of instructions that takes in data, processes it, and ultimately assigns a score to each person, predicting their likelihood of engaging in fraud.
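To make the "takes an input, processes it, and produces an output" idea concrete, here is a minimal, purely hypothetical sketch in Python. The agency's actual model, features, and weights are not public, so every rule, threshold, and field name below is an assumption used only to illustrate what an algorithm is.

```python
# Purely illustrative, rule-based sketch of "input -> process -> output".
# All rules, thresholds, and field names are hypothetical; this is not the
# agency's actual model.

def fraud_risk_score(application: dict) -> float:
    """Take an input (an application record), process it, return an output (a score)."""
    score = 0.0
    if application.get("claims_last_12_months", 0) > 10:      # hypothetical rule
        score += 0.4
    if application.get("days_claimed", 0) > 30:               # hypothetical rule
        score += 0.3
    if not application.get("employer_confirmed", True):       # hypothetical rule
        score += 0.3
    return min(score, 1.0)

print(fraud_risk_score({"claims_last_12_months": 12, "days_claimed": 5}))  # 0.4
```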
What is algorithmic risk?
Algorithmic risk is the possibility that an algorithm malfunctions, discriminates, or operates in a way that raises ethical concerns. In other words, it is the potential for an algorithm to cause harm.
Continuing with the same example above, there is a risk that the fraud detection algorithm could incorrectly predict that a person has committed fraud. This could lead to an unjust investigation and the wrongful denial of financial support to a parent in need. Such harms are quite possible with algorithms because these systems can introduce algorithmic biases into the environments they are embedded in. These biases may stem from statistical limitations, historical or social inequalities, or personal biases.
Algorithmic risk and impact assessment involves first mapping the potential positive and negative impacts of an algorithm and then assessing the risks associated with them.
You can think of it as being a detective for your algorithm—asking many curious and intelligent questions, conducting algorithmic tests, and documenting the findings in a structured form. In this form, you need to provide information about potential stakeholders, their interests, possible harms, the causes of these harms, and the metrics used to measure them. Additionally, you must assess the likelihood and magnitude of these harms (using a risk impact matrix), determine the overall risk level, and outline mitigation strategies. That’s it—simple!
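As a sketch of what that structured form might look like, here is a minimal, hypothetical risk-register entry combined with a simple likelihood-by-magnitude matrix. The scales, labels, and thresholds are my own illustrative assumptions; real frameworks define their own.

```python
# A minimal, hypothetical risk-register entry with a simple 3x3
# likelihood/magnitude matrix. Scales, labels, and thresholds are
# illustrative assumptions, not a standard.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, magnitude: str) -> str:
    """Combine likelihood and magnitude into an overall risk level."""
    product = LEVELS[likelihood] * LEVELS[magnitude]
    if product >= 6:
        return "high"
    if product >= 3:
        return "medium"
    return "low"

risk_entry = {
    "stakeholder": "parents applying for sick-child benefits",       # hypothetical
    "harm": "wrongful fraud investigation and denied support",       # hypothetical
    "cause": "biased or erroneous risk score",                       # hypothetical
    "metric": "false-positive rate by demographic group",            # hypothetical
    "likelihood": "medium",
    "magnitude": "high",
    "mitigations": ["human review before any investigation", "regular bias testing"],
}
risk_entry["risk_level"] = risk_level(risk_entry["likelihood"], risk_entry["magnitude"])
print(risk_entry["risk_level"])  # high
```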

Even though algorithmic risk assessment is relatively new, its origins can be found in traditional risk assessment frameworks such as ISO 31000 and COSO Enterprise Risk Management. More specifically, it is rooted in model and AI risk management frameworks such as SR 11-7: Model Risk Management, ISO/IEC 23894:2023, ISO/IEC 42001, NIST AI RMF, and ForHumanity.
Since its inception, there has been a strong emphasis on the socio-technical context, which is extremely important to me. All these foundational frameworks address not only algorithmic risk and impact assessment but also socio-technical assessments.
I can almost hear you saying, "Wait a minute—what do you mean by yet another term now?"
What is a socio-technical algorithm?
Ok, let's all take a deep breath and relax. Understanding what a socio-technical algorithm is turns out to be quite simple, and it is crucial for everyone involved in risk assessments.
In The Algorithm Audit paper, the term socio-technical algorithm is explained as follows:
“Understanding the context within which the algorithm is deployed means assessing and understanding a range of broader social and political facts about its stated purpose... It might include the process of development of the algorithm... preparing the data for the training algorithm, the process of delivering an algorithm to its primary user, and often, most importantly, the setting within which it is used.” — From “The Algorithm Audit,” Brown et al. (2021)
In the BABL AI audit certificate program, there is a strong emphasis on this concept. They highlight the risks of harm through the CIDA framework (Context, Inputs, Decision procedure, Actions), underscoring its importance in responsible AI governance.
It simply invites people working in the algorithmic space to view algorithms not only as technical tools or solutions but as part of the broader socio-technical systems they are embedded in. This means discussing the context, inputs, decisions, and actions related to algorithms, which involves asking deeper questions for each (a minimal documentation sketch follows the list below):
Context of the Algorithm’s Use
Purpose: What is the intended function or objective of the algorithm?
Social/Political Environment: What external factors influence its deployment, such as societal norms or regulatory policies?
Stakeholders: Who is affected by the algorithm, and what are their interests and concerns?
Inputs
Types of Data: What data is used for training and operation?
Collection Process: How is data collected, and what biases might exist in this process?
Resulting Filter Effects: How does the data influence who or what gets included or excluded?
Decision Procedure
Algorithmic Output: What results or classifications does the algorithm produce?
Human-Mediated (or Not): Are decisions reviewed or modified by humans, and how does this influence outcomes?
Actions Taken
How Decisions Are Used: What actions follow the algorithm’s output?
Consequential Impact: What are the downstream effects on individuals, organizations, or society?
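One way to keep the answers to these questions organized is to record them in a simple structured document. The sketch below is just one possible layout, assuming nothing beyond the four CIDA headings above; the field names and example values are hypothetical.

```python
# A minimal sketch of recording CIDA answers as structured data. The four
# sections mirror the questions above; field names and example values are
# illustrative assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class CIDAAssessment:
    # Context of the algorithm's use
    purpose: str
    social_political_environment: str
    stakeholders: list[str]
    # Inputs
    data_types: list[str]
    collection_process: str
    filter_effects: str
    # Decision procedure
    algorithmic_output: str
    human_mediated: bool
    # Actions taken
    how_decisions_are_used: str
    consequential_impact: str

assessment = CIDAAssessment(
    purpose="flag benefit applications for fraud review",                  # hypothetical
    social_political_environment="welfare law and anti-discrimination rules",
    stakeholders=["parents", "caseworkers", "the agency"],
    data_types=["claim history", "employment records"],
    collection_process="administrative registers",
    filter_effects="frequent claimants are over-represented",
    algorithmic_output="a fraud-risk score per application",
    human_mediated=True,
    how_decisions_are_used="high scores trigger manual investigation",
    consequential_impact="investigations, delayed or denied payments",
)
print(assessment.purpose)
```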
When you begin to pose these deeper questions to a range of stakeholders, you start conducting a thorough risk impact assessment of the socio-technical algorithms at hand. You can imagine the process as being similar to the following steps:
Building a diverse, cross-functional team
Defining the socio-technical context of the algorithm's deployment
Identifying key stakeholders and their roles
Examining potential interests, rights, and harms
Investigating the root causes of harm or rights violations
Mapping harms to their underlying causes
Assessing and prioritizing risks based on impact and likelihood
Documenting identified risks, existing mitigation measures, and any remaining concerns

Assessing and evaluating a socio-technical algorithm is not easy; it is an iterative process that takes time and requires the perspectives and worldviews of diverse stakeholders. Understanding and anticipating the impacts and harms of technology demands creativity, critical thinking, imagination, and information. Knowing real-world harms, harm types, metrics, and regulations can be incredibly helpful in this process!
Resources for socio-technical algorithmic impact assessment
Real-World Examples of AI Harms: One way to gather information about real-world harms is by following the news!
Incident Databases: Another way is to familiarize ourselves with AI incidents. Incident databases, such as the AI Risk Repository, OECD AI Incidents Monitor, and the AI Incident Database, are great for this.
Harm Types: While there are still no established standards or taxonomies for potential algorithmic harms, many resources can be very helpful. I discussed this issue in detail in my previous newsletter article, so feel free to take a look!
Emerging Laws & Regulations: Many interests or rights are protected by law, such as civil liberties, human rights, and international human rights standards. When we talk about harm, we are referring both to unlawful practices and to unfair ones, impacting individuals and communities alike. Following emerging laws and regulations is crucial for operating responsibly in the AI space.
Metrics of a Socio-Technical System: To measure potential harms, we need metrics such as bias, effectiveness, transparency, direct impacts, and security/access. You can refer to the NIST AI RMF Playbook and the paper The Algorithm Audit: Scoring the Algorithms That Score Us for more details.
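To make one of these metrics concrete, here is a minimal sketch of a single, commonly used bias metric, demographic parity difference (just one possible choice among many): the gap between two groups in how often they receive the flagged outcome. The data below is made up for illustration.

```python
# One illustrative bias metric: demographic parity difference, i.e. the gap
# between two groups in the rate of receiving the "positive" (here,
# flagged-for-review) outcome. The data and group labels are made up.

def selection_rate(outcomes: list[int]) -> float:
    """Share of cases that received the outcome of interest (1 = flagged)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical flagging decisions for two demographic groups
flags_group_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # 60% flagged
flags_group_b = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]  # 20% flagged

print(round(demographic_parity_difference(flags_group_a, flags_group_b), 2))  # 0.4
```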
Ok, so we have different tools in our pockets and are prepared for detective work!
(The author is an AI governance consultant and trainer. She has conducted several hands-on workshops and training on algorithmic risk and impact assessments. If you're interested in participating in one, feel free to DM her on her page AI of Your Choice! Views expressed are personal.)

