Understanding and Mitigating Algorithmic Bias: A Comprehensive Guide
- Ayşegül Güzel
According to Britannica, bias is a tendency to believe that some people, ideas, etc., are better than others, which usually results in treating some people unfairly.

Bias is a part of the human experience. You cannot simply say, "Oh no, I never have bias." In fact, bias in human experience is often described as a blind spot: biases—whether cognitive (e.g., confirmation bias) or social (e.g., racial bias)—operate outside of our conscious awareness, influencing our judgments and decisions without our realizing it. Similarly, Jung's concept of the shadow consists of repressed, unacknowledged aspects of ourselves that still affect our behavior.
Jung argued that what we reject in ourselves often gets projected onto others. This aligns with bias—especially in-group/out-group biases—where we unconsciously attribute negative qualities to others because they reflect something unrecognized within ourselves. Many social biases (e.g., sexism, racism) stem from deep-seated fears, insecurities, or aspects of the self we refuse to confront. For example, if someone represses their own aggressive tendencies, they may perceive others (especially those who are different from them) as aggressive or dangerous. Jungian psychology suggests that integrating the shadow is key to individuation (becoming a whole, self-aware person). Likewise, overcoming bias requires self-reflection, critical thinking, and confronting uncomfortable truths about ourselves and our assumptions.
As a community facilitator who has worked on communication for personal, interpersonal, and systemic change, I have been engaging with the topic of human bias for many years, both personally and professionally. Today, however, our topic is algorithmic bias!
It is interesting that we use the same term, "bias," for algorithms as well. According to Wikipedia:
“The earliest computer programs were designed to mimic human reasoning and deductions, and were deemed to be functioning when they successfully and consistently reproduced that human logic. In his 1976 book Computer Power and Human Reason, artificial intelligence pioneer Joseph Weizenbaum suggested that bias could arise both from the data used in a program, but also from the way a program is coded.
An early example of algorithmic bias resulted in as many as 60 women and ethnic minorities denied entry to St. George's Hospital Medical School per year from 1982 to 1986, based on implementation of a new computer-guidance assessment system that denied entry to women and men with "foreign-sounding names" based on historical trends in admissions.”
What is algorithmic bias?
Algorithmic bias refers to systematic and recurring errors in a computer system that lead to unfair or skewed outcomes, often favoring one group over another in ways that deviate from the algorithm’s intended purpose.
It is quite interesting, though, that many algorithmic systems declare that their objective is to remove human bias. However, it is important to understand that AI is not bias-free. On the contrary, AI can have biases of its own and, in many cases, amplify human biases.
In this article, we will discuss different types of algorithmic bias (it is impossible to categorize them all—just think about it: can you categorize all human biases? No. Algorithmic biases are arguably even more complicated than that!) as well as the various mitigation strategies available.
Different Kinds of Algorithmic Bias
First, I will start by referring to the NIST article: "Towards a Standard for Identifying and Managing Bias in Artificial Intelligence"
The article does a great job of illustrating the complexity of bias in the algorithmic dimension and provides a comprehensive, though not exhaustive, list. We do not know and may never fully understand the hidden biases in the systems we create. I believe this is a valid point. If you disagree, please let me know—I would love to hear your perspective on this!
The article categorizes the bias space in the AI field into systemic biases, human biases, and statistical/computational biases.
Here, you can see the sub-categorization of different types of algorithmic bias.
Moreover, the figure below provides examples of how the three categories of bias—systemic, statistical/computational, and human—interact and contribute to harms within the data and processes used in AI applications, as well as the validation procedures for determining performance.
After discussing the overall bias categories, I will now provide a list of the most common types of AI bias, as presented by Jeffery Recker in his course "Understanding Algorithmic Bias" at BABL AI.
Algorithmic bias occurs when algorithms systematically make decisions that unfairly disadvantage specific groups of people. This can happen due to flaws in data collection, model training, or deployment, leading to unintended discrimination. Below are common types of algorithmic bias:
Gender Bias – Occurs when one gender is favored over another or excluded from results. For example, a hiring algorithm that prioritizes men over women for certain roles or fails to recognize transgender individuals.
Racial Bias – Arises when an algorithm disproportionately favors or discriminates against certain racial groups. An example is AI in the criminal justice system, which may inaccurately predict higher recidivism rates for certain racial demographics.
Age Bias – Happens when an algorithm favors specific age groups over others, often as a result of unbalanced data representation. For example, job recruitment AI might prioritize younger candidates over older, more experienced applicants.
Socioeconomic Bias – Occurs when algorithms disproportionately represent individuals with economic advantages over those from lower-income backgrounds. This can be seen in AI-powered facial recognition or workplace monitoring systems that assume access to certain technologies.
Confirmation Bias – Happens when an algorithm reinforces pre-existing beliefs by continuously feeding users content aligned with their prior behavior. This is common in social media recommendation systems on platforms like YouTube, TikTok, Facebook, and Instagram.
Representation Bias – Arises when certain demographic groups are underrepresented in an algorithm’s training data, leading to inaccurate or unfair outcomes (a short code sketch after this list makes this concrete).
Concept Drift – Occurs when an algorithm fails to adapt to unforeseen events or changing circumstances. For example, an AI assistant predicting purchasing behavior may continue to suggest products from a previous location after a user moves to a new city.
Privacy Bias – Appears when individuals who opt out of data collection are excluded from algorithmic decision-making. For instance, an online shopping AI that personalizes recommendations may disadvantage users who decline data tracking.
Disability Bias – Happens when an algorithm does not account for users with physical or mental disabilities. For example, AI-driven emotion detection may misinterpret expressions from people with neurodivergent conditions, leading to inaccurate results in workplaces, classrooms, or vehicles.
Language Bias – Arises when an AI system fails to accommodate linguistic diversity, disadvantaging non-native speakers. An example is an automated grading algorithm penalizing students who write in English as a second language.
Regional Bias – Occurs when certain geographic areas are unfairly represented in data. For example, using ZIP codes to determine loan approvals without considering historical economic disparities can disadvantage marginalized communities.
Recency Bias – Happens when an algorithm prioritizes recent data over historical context, potentially leading to inaccurate predictions. An example is an AI system forecasting stock market trends based only on recent events, ignoring long-term patterns.
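To make representation bias concrete, here is a minimal sketch on fully synthetic data (the groups, sizes, and distributions are all invented for illustration): a classifier trained on a pool where one group supplies only a small fraction of the examples tends to perform noticeably worse for that group.

```python
# Minimal, synthetic sketch of representation bias. A model trained on data
# dominated by group A (95%) fits A's decision boundary and largely ignores
# the underrepresented group B, whose boundary differs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features; this group's class boundary sits at x0 + x1 = 2 * shift.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

X_a, y_a = make_group(1900, shift=0.0)  # well-represented group A
X_b, y_b = make_group(100, shift=2.0)   # underrepresented group B
X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array(["A"] * len(y_a) + ["B"] * len(y_b))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)

for g in ("A", "B"):
    mask = g_te == g
    print(f"group {g}: accuracy = {model.score(X_te[mask], y_te[mask]):.2f} "
          f"(n = {mask.sum()})")
```

Running this typically prints high accuracy for group A and close-to-chance accuracy for group B, because the model's single decision boundary is fitted almost entirely to group A's distribution.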
Please be aware that it is impossible to create a comprehensive list of all the different biases that exist in the world. We are not fully aware of all the human, systemic, and computational biases out there. The list above includes the most common ones. If you think I have forgotten any, please let me know in the comments. Now, I would like to provide two examples:
Sasha Costanza-Chock’s book Design Justice highlights how airport security systems, particularly millimeter wave scanners, encode gender bias by enforcing a rigid, binary understanding of gender. The scanning technology requires officers to select "Male" or "Female," and any deviation from the system’s binary, cis-normative model triggers an anomaly warning, subjecting transgender and nonbinary individuals to additional scrutiny and discomfort. This bias is not just technical but deeply embedded in sociotechnical systems, reflecting and reinforcing broader cisnormative assumptions in algorithmic design. The experience extends beyond gender, intersecting with racial and disability bias, as Black women, Muslim travelers, and disabled individuals also face disproportionate burdens from these flawed security protocols (Costanza-Chock, Design Justice, pp. 1-5).
The Strategeion hiring case illustrates algorithmic bias in recruitment, where an AI system (PARiS) unintentionally discriminated against a qualified candidate, Hara, due to representation bias and indirect disability bias. The AI was trained on past employees—mostly ex-military personnel—who had a strong correlation with athletic backgrounds. As a result, it learned to associate sports experience with being a "good fit," leading to the exclusion of candidates like Hara, who had no history of athletic participation due to lifelong wheelchair use. This demonstrates how biased training data can reinforce unfair hiring practices, even when protected attributes like disability status are not explicitly considered. The case highlights the risks of automation bias, where human HR teams, initially skeptical, placed increasing trust in the AI without questioning its decisions. This case study underscores the ethical challenges in AI-driven hiring and the need for bias mitigation strategies to ensure fairness and diversity in recruitment (Princeton AI Ethics Case Study, 2023).
AI Bias Mitigation Strategies
Before discussing them, it is important to emphasize the context of bias mitigation strategies. As we have seen with the different types of bias, bias in AI can result from decisions made during data selection, feature selection, or other choices by the people deploying these systems. With sociotechnical problems, code and data alone are insufficient because the real challenges extend beyond system performance to how the system is used. The key questions are not just about efficiency but about power: Who benefits? Who is harmed? Who makes the decisions?
This video illustrates a powerful lesson on justice and moral responsibility. A teacher intentionally expels a student unfairly to demonstrate how people often remain silent when injustice does not directly affect them. The message emphasizes the importance of speaking up against wrongdoing, as justice can only survive when individuals take responsibility for protecting it.
The best strategy for mitigating algorithmic bias is to welcome different perspectives and worldviews into teams, create safe spaces for speaking the truth, establish whistleblowing systems, and take personal responsibility to be conscious of possible discrimination. If discrimination occurs, it is important to take accountability for it.
Having said this, I have attempted to categorize different bias mitigation strategies below:
1. Organizational and Cultural Strategies
Education and Awareness: Educating developers, employees, and users alike about AI risks is crucial. This involves raising awareness of potential biases and identifying ways to mitigate them. It fosters a culture of understanding and proactive risk management.
Ethical Standards and Governance: Establishing ethical standards and governance frameworks ensures that organizations consider emerging regulations and potential risks. This proactive approach helps create a culture focused on ethical AI development and deployment.
Diversity of Thought: Incorporating diverse perspectives at all stages—development, testing, and implementation—helps identify and address potential harms. A diverse team can provide a broader range of insights into potential biases.
2. Transparency and Accountability
Transparency: Providing clear information about the steps taken to identify and mitigate risks helps stakeholders understand the organization's commitment to fairness and accountability.
Algorithm Audit: Conducting independent third-party audits can evaluate AI systems for bias, assess governance structures, and ensure transparency, providing assurance to stakeholders.
3. Testing and Monitoring
Ongoing Testing and Monitoring: Implementing metrics for bias, accuracy, and other technical aspects allows for continuous improvement. Regular testing with representative datasets helps identify areas needing improvement and ensures a more equitable AI system (a minimal sketch of one such metric follows this section).
AI Risk and Impact Assessment: These assessments identify potential risks in socio-technical systems and relevant stakeholders who might be affected, helping to mitigate harm.
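As a concrete illustration of one widely used monitoring metric, here is a minimal sketch of the disparate impact ratio: the selection rate of the unprivileged group divided by that of the privileged group, with the common "four-fifths rule" as a flagging heuristic. The predictions and group labels below are entirely made up for illustration.

```python
# Minimal sketch of disparate-impact monitoring on hypothetical model outputs.
import numpy as np

def disparate_impact(y_pred, groups, privileged, unprivileged):
    # Selection rate = share of the group receiving the positive outcome.
    rate = lambda g: y_pred[groups == g].mean()
    return rate(unprivileged) / rate(privileged)

# Hypothetical hiring-model outputs for ten applicants (1 = advance).
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 1, 0, 0])
groups = np.array(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])

ratio = disparate_impact(y_pred, groups, privileged="m", unprivileged="f")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.25 in this toy example
if ratio < 0.8:
    print("below the four-fifths rule of thumb -- flag for review")
```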
4. Explainability and Reproducibility
Explainable and Reproducible Results: Ensuring that AI systems provide understandable and reproducible outcomes helps organizations consider and test for various risks, enhancing trust and accountability.
5. De-Biasing Techniques
Pre-processing Techniques:
Reweighting the Training Data: Adjusts data weights to ensure balanced learning across groups.
Resampling the Data: Balances datasets through techniques like oversampling minority classes.
Synthetic Minority Over-sampling Technique (SMOTE): Generates synthetic data to address class imbalance.
Feature Selection/Modification: Removes biased features or creates new ones to reduce bias.
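To illustrate the pre-processing family, here is a minimal sketch of reweighting in the spirit of Kamiran and Calders' reweighing method: each sample receives the weight P(group) × P(label) / P(group, label), so that group membership and label become statistically independent in the weighted training set. The column names and data are invented for illustration.

```python
# Minimal sketch of pre-processing by reweighting a tiny illustrative dataset.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "feature": [0.2, 1.1, 0.7, 1.5, 0.1, 0.9, 1.3, 0.4],
    "group":   ["a", "a", "a", "a", "b", "b", "b", "b"],  # sensitive attribute
    "label":   [1,   1,   1,   0,   0,   0,   1,   0],
})

# Weight each row by P(group) * P(label) / P(group, label).
weights = np.empty(len(df))
for i, row in df.iterrows():
    p_group = (df["group"] == row["group"]).mean()
    p_label = (df["label"] == row["label"]).mean()
    p_joint = ((df["group"] == row["group"])
               & (df["label"] == row["label"])).mean()
    weights[i] = p_group * p_label / p_joint

# Any estimator that accepts sample_weight can train on the reweighted data.
model = LogisticRegression()
model.fit(df[["feature"]], df["label"], sample_weight=weights)
print(weights.round(2))  # over- and underrepresented cells get boosted/damped
```

For resampling and SMOTE, the imbalanced-learn library provides ready-made implementations, so you rarely need to hand-roll them.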
In-processing Techniques:
Regularization: Constrains learning to reduce sensitivity to bias.
Cost-sensitive Learning: Assigns different misclassification costs to reduce bias in predictions.
Adversarial Debiasing: Trains models to make fair predictions while an adversary tests for bias.
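Adversarial debiasing requires a second network and is harder to show briefly, so to illustrate the in-processing family here is a sketch of the simplest of the three: cost-sensitive learning via scikit-learn's class_weight parameter, on a synthetic imbalanced dataset (all numbers are illustrative).

```python
# Minimal sketch of in-processing via cost-sensitive learning: raising the
# misclassification cost of the minority class during training.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

plain = LogisticRegression().fit(X_tr, y_tr)
weighted = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)

print("minority-class recall, plain:   ",
      round(recall_score(y_te, plain.predict(X_te)), 2))
print("minority-class recall, weighted:",
      round(recall_score(y_te, weighted.predict(X_te)), 2))
```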
Post-processing Techniques:
Threshold Adjustment: Adjusts decision thresholds to equalize error rates across groups.
Calibration: Aligns model predictions with actual outcomes, reducing bias.
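Here is a minimal sketch of post-processing via per-group threshold adjustment: instead of one global 0.5 cutoff, a separate decision threshold per group brings selection rates closer to equal. The scores, groups, and threshold values below are invented for illustration.

```python
# Minimal sketch of threshold adjustment on made-up model scores.
import numpy as np

scores = np.array([0.91, 0.62, 0.55, 0.40, 0.85, 0.48, 0.35, 0.30])
groups = np.array(["a",  "a",  "a",  "a",  "b",  "b",  "b",  "b"])

def predict(scores, groups, thresholds):
    # Apply a group-specific cutoff to each score.
    cut = np.array([thresholds[g] for g in groups])
    return (scores >= cut).astype(int)

# One global threshold: group "b" is selected far less often (0.25 vs 0.75).
global_pred = predict(scores, groups, {"a": 0.5, "b": 0.5})
# Lowering group "b"'s cutoff equalizes selection rates (0.75 each here).
adjusted_pred = predict(scores, groups, {"a": 0.5, "b": 0.34})

for name, pred in [("global", global_pred), ("adjusted", adjusted_pred)]:
    for g in ("a", "b"):
        rate = pred[groups == g].mean()
        print(f"{name:>8} thresholds, group {g}: selection rate = {rate:.2f}")
```

For calibration, scikit-learn's CalibratedClassifierCV is a common starting point.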
6. Fairness Approaches
Fairness through Awareness: Explicitly includes sensitive attributes to correct dataset biases.
Fairness through Unawareness: Ignores sensitive attributes, though this may not be effective if other features correlate with them.
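The caveat about correlated features is worth seeing in code. In this minimal synthetic sketch, the model never receives the sensitive attribute, yet a correlated proxy feature (a made-up "zip_code") lets it reproduce the biased historical outcome anyway; all variables and numbers are invented for illustration.

```python
# Minimal sketch of why fairness through unawareness can fail: a proxy
# feature correlated with the hidden sensitive attribute carries the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)                     # sensitive attribute
zip_code = group ^ (rng.random(n) < 0.1).astype(int)   # ~90% correlated proxy
income = rng.normal(loc=50 + 20 * group, scale=10, size=n)

# Historical outcomes were themselves biased in favor of group 1.
y = ((income > 55) & (group == 1)).astype(int)

# "Unaware" model: trained WITHOUT the sensitive attribute.
X = np.column_stack([zip_code, income])
pred = LogisticRegression().fit(X, y).predict(X)

for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")
```

Even though the sensitive attribute was dropped, the predicted positive rates differ sharply across groups, which is why fairness through unawareness is generally considered a weak safeguard on its own.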
Algorithmic bias is a complex and multifaceted issue that mirrors the biases inherent in human decision-making. As AI systems increasingly influence various aspects of our lives, it is crucial to recognize that these technologies are not inherently neutral. They can perpetuate and even amplify existing biases if not carefully managed. Addressing algorithmic bias requires a holistic approach that combines organizational strategies, transparency, continuous testing, and technical de-biasing techniques. By fostering a culture of diversity, ethical governance, and accountability, organizations can work towards creating AI systems that are fairer and more equitable. Ultimately, the responsibility lies with all stakeholders—developers, users, and policymakers—to ensure that AI technologies serve the broader goal of social justice and do not reinforce existing inequalities.

Ayşegül Güzel
is a trusted voice in AI governance and responsible AI, guiding organizations to develop ethical, regulation-ready AI systems. With a background spanning social entrepreneurship (as an Ashoka Youth Fellow), innovation consulting, and data science, she brings a rare, holistic perspective to building trustworthy AI. As the founder of AI of your choice, Ayşegül works as a consultant, trainer, speaker, and writer—merging technical expertise with deep social insight. A Certified AI Auditor (BABL AI) and TED speaker, she specializes in aligning AI development with human values and systemic impact. Fluent in English, Turkish, and Spanish, Ayşegül collaborates globally to shape AI that serves people, not just progress.