Combatting Bias in AI Systems

Jeff Mielke

April 03, 2025

One of the greatest threats to AI systems is bias. Bias in AI can lead automated systems to make discriminatory and unfair decisions that harm or mistreat specific groups of people, often without anyone knowing it. This discrimination may judge people based on race, culture, gender, and other factors.

Many important systems in our daily lives use AI-powered tools, including crucial areas such as HR, healthcare, and finance. In fact, an estimated 99% of Fortune 500 companies now use some form of AI in their hiring processes. Ensuring these systems treat everyone equally is essential.

This is particularly important for organizations leveraging AI technology. If someone finds biases in AI systems and makes them public, the company can suffer serious reputational damage.

AI designers must understand what bias in AI looks like, why it is essential to review AI systems for bias, and how to mitigate these biases in their AI models.

What Does Bias in AI Look Like?

Bias in AI originates from the data used to train the system. If this data is unrepresentative or historically biased, AI systems trained on it will perpetuate or amplify those biases. Human feedback can also contribute to bias, such as when a system is consistently rewarded for providing recommendations that align with someone's existing beliefs.
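As a toy illustration of how biased training data flows straight into a model's decisions, consider a naive model that simply predicts the most common past outcome for each group. The data and group labels below are hypothetical, invented for this sketch:

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired) pairs.
# Past decisions favored group "m" over group "f".
history = [("m", 1)] * 80 + [("m", 0)] * 20 + [("f", 1)] * 30 + [("f", 0)] * 70

def majority_label(records, group):
    """Predict the most common past outcome for a group."""
    labels = [hired for g, hired in records if g == group]
    return Counter(labels).most_common(1)[0][0]

print(majority_label(history, "m"))  # 1: the "model" learns to hire group m
print(majority_label(history, "f"))  # 0: and to reject group f
```

A real model is far more complex, but the mechanism is the same: if historical decisions encode a bias, a system optimized to reproduce those decisions reproduces the bias.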

How bias is defined may also depend on company strategy. One company may define bias in AI differently than another because it has different concerns about its products.

For example, Apple’s AI product strategy focuses on differentiation, while Walmart’s focuses on transaction efficiency. Since these companies have different concerns for their AI systems, they will prioritize different biases.

Bias in AI also can be particularly dangerous if paired with anthropomorphic fallacies.

Anthropomorphic Fallacies in AI

Anthropomorphic fallacies are errors in attributing human characteristics to things that are not human. This is extremely dangerous for AI systems that may have biases. Human users can unconsciously absorb these biases from the AI system, and these biases may persist even after they stop using the AI program. Two fallacies often seen with AI systems are the Eliza Effect and AI exceptionalism.

With the Eliza Effect, people mistakenly believe that an AI system has human thoughts and feelings, overestimating the system's overall intelligence. Researchers named this phenomenon after ELIZA, an early chatbot that mimicked therapy conversations in natural language.

One example of the Eliza Effect is if a user feels they are having a deep conversation with a customer service chatbot, even though the chatbot is following its preprogrammed responses. Overestimating an AI system’s intelligence can lead to an excessive level of trust, which can be dangerous if the system gets things wrong.

Another anthropomorphic fallacy that invites bias in AI systems is AI exceptionalism, the belief that AI is superior to human capabilities. If users believe that an AI system can't have the same biases humans have, the biases present in that system may go unchallenged and be amplified.

Creating the perception of intelligence or awareness beyond what the AI system actually possesses can itself bias how users interact with AI systems. Once developers know what bias looks like in AI models, they need to ensure their AI systems are free of it.

Why It Is Important to Review AI Models for Bias

Without checks and balances to prevent AI biases, this technology could mistreat people in important services or processes.

Ethical Considerations in AI Bias

Biased AI systems can perpetuate social inequities by making unfair decisions against certain groups of people, leading to negative consequences like discrimination or loss of trust. 

A recent study found that language models perpetuated systematic racial prejudice against African Americans based on their dialect. Another example, specifically of gender bias, occurred when Amazon's AI-powered recruiting engine strongly favored male candidates over female candidates.

Other examples of bias in AI that could be considered unethical include:

  • Racial bias: Racial bias occurs when certain racial groups are favored over other racial groups.
  • Prejudice bias: Prejudice bias is present when the training data reflects existing prejudices, stereotypes, and societal assumptions.
  • Cultural bias: When an AI system reflects the cultural norms and values of the group that designed or trained it, potentially leading to misunderstandings or exclusion of other cultural groups, it is perpetuating a cultural bias.
  • Gender bias: Gender bias is the unfair favoring of one gender over the other, often reflecting societal stereotypes and biases in the data used to train AI systems.
  • Societal bias: Societal bias is the reflection of broader societal biases, such as racism or sexism, within AI systems, often stemming from biased data or biased human decisions in the development process.
  • Stereotyping bias: Stereotyping bias occurs when an AI system reinforces harmful stereotypes.

Without laws overseeing AI to hold companies responsible, developers must ensure their AI models are free of biases and inclusive.

Ensuring AI Models Are Free of Bias

Confirming that AI systems are free of bias helps ensure that people of different groups are treated fairly. Bias can be mitigated by doing the following:

  • Cleaning training data. Ensuring the data used to train the AI model is free of biases keeps those biases from becoming prevalent in the model.
  • Incorporating an inclusive design process. This means recognizing exclusion, learning from diversity, and using diverse datasets.
  • Conducting regular audits. Regularly auditing the AI system’s performance across different demographics helps identify potential bias.
  • Bias testing. Developers should evaluate AI systems against known benchmarks to detect disparities in outcomes across different groups.
  • Collecting diverse feedback. Feedback from diverse users helps identify potential biases against specific groups.
  • Implementing frameworks that emphasize fairness, accountability, and transparency. Notable companies like Google and IBM have started using such frameworks, creating principles and ethical codes to reduce bias in their AI systems.
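A simple bias test of the kind described above can be sketched by comparing a model's selection rates across demographic groups. The sketch below uses the "four-fifths rule," a common (though not definitive) disparate-impact heuristic; the decision data and group names are hypothetical, for illustration only:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Hypothetical decisions from an AI screening model, per group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% selected
}

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: ratio falls below the four-fifths threshold.")
```

In practice, audits should use richer fairness metrics (equalized odds, calibration across groups) and dedicated tooling, but even a check this simple can surface disparities worth investigating.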


In the context of the Eliza Effect and AI exceptionalism, in particular, it is vital to design your AI systems with an understanding that users may overestimate the system’s capabilities, including accounting for harmful biases.

Transparency is key to addressing bias. Companies should openly share how they develop their AI systems and how they address potential biases. Examples of transparency include using prompts, disclaimers, or guidance to clarify the system’s limitations.

Those developing AI-powered systems must recognize biases in their AI systems, do regular reviews for bias, and mitigate any biases they find. Mitigating bias in AI systems ensures everyone can take advantage of the benefits of leveraging AI-powered systems.

Mitigating bias extends to just about every application of AI. Learn how UX designers address the Eliza Effect and mitigate bias in AI product design.

Jeff Mielke

Design Director

Jeff Mielke serves as 8th Light’s Design Director, bringing a wealth of experience from his work with both prominent and emerging brands over the years. His expertise lies in tackling intricate challenges and infusing a user-centric approach into his design practices. His portfolio spans a range of applications, from consumer-oriented to enterprise-level solutions.