How to Address and Mitigate Bias in AI Product Design

[Image: an Android robot and the word "UX" each sit in a scale pan]

Jeff Mielke

October 25, 2024

As a UX designer passionate about the intersection of technology and human experience, I've witnessed firsthand the transformative power of AI solutions in our products. I've seen hours of research findings distilled into a consolidated insights report within a minute. I've seen 12 logo variations generated in seconds before my eyes.

However, with this great accelerator comes great responsibility. One of the most pressing challenges we face today is the presence of bias in AI systems — a problem that doesn't just skew data, but can significantly impact real lives.

Understanding the Root of AI Bias

Before we can tackle bias, it's crucial to understand where it originates. AI systems learn from data — data that is a reflection of our world, complete with its inequalities and prejudices. When this data is unrepresentative or historically biased, the AI models trained on it can perpetuate and even amplify those biases.

In the healthcare industry, for example, AI algorithms trained on biased or unrepresentative datasets can lead to inaccurate diagnoses for certain patient groups. Facial recognition algorithms used to detect genetic disorders and AI models for detecting skin cancer often perform poorly on patients with darker skin tones because they are trained predominantly on images of light-skinned patients. Similarly, chest X-ray reading algorithms trained primarily on male patient data are significantly less accurate when applied to female patients.

In 2014, Amazon developed an AI recruiting tool to streamline hiring, but the tool turned out to exhibit significant gender bias. The AI was trained on a decade's worth of resumes, predominantly from male candidates due to the tech industry's male dominance. Consequently, the system learned to favor male candidates and penalize resumes containing words associated with women, like "women's" in "women's chess club captain," and it even downgraded graduates of all-women's colleges.

The Eliza Effect: A Cognitive Bias in Human-Computer Interaction

Alongside bias, another subtle challenge in AI design is the Eliza effect — the tendency of users to ascribe more intelligence or human-like understanding to AI systems than they actually possess. Named after ELIZA, an early chatbot that simulated conversation by rephrasing user inputs, this effect can lead users to overtrust AI outputs or misunderstand the limitations of AI technologies. Today's generative AI is not actually thinking; it is strategically assembling pieces of a complex puzzle to answer your question. Its end goal is to produce an answer, regardless of what needs to be made up along the way.

The Role of UX Designers in AI Product Design and Bias Mitigation

You might wonder: isn't bias in AI a data science problem? Data scientists and engineers play a critical role, but UX designers advocate for the user, grounded in a deep understanding of user needs, behaviors, and contexts. This user-centric perspective uniquely positions UX designers to identify potential biases, address the Eliza effect, and design solutions that are equitable, transparent, and trustworthy.

"UX designers have a moral obligation to prevent such outcomes."

Google Translate exhibited gender bias in its translations, often assigning male pronouns to traditionally male-associated professions and female pronouns to others when translating from gender-neutral languages like Turkish. UX researchers and designers identified this bias, which stemmed from the AI model's tendency to replicate gender stereotypes present in its training data. Users often trusted the translations without question. To address these issues, Google's UX team implemented gender-specific translation options, added visual cues to flag gendered terms, included educational icons explaining translation limitations, and integrated a feedback mechanism for user input. These interventions not only reduced gender bias but also educated users on AI's limitations, illustrating the vital role of UX design in creating fairer, more transparent AI tools.

Strategies for Mitigating Bias and Addressing the Eliza Effect: A UX Perspective

Here are key strategies for mitigating bias and addressing the Eliza effect in AI design to create fairer, more transparent, and trustworthy AI interactions.

1. Inclusive Research and Testing

To mitigate bias, teams should start by ensuring that user research includes a diverse cross-section of society. This means considering various demographics — age, gender, ethnicity, socioeconomic status, and abilities. By engaging with a wide range of users, teams can identify unique needs, potential biases, and misconceptions early in the design process. Remote usability tools, such as Lyssna, offer ways to screen a variety of users in a short period of time.
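Beyond screening, it can help to check the recruited panel against the demographic mix the study committed to. Below is a minimal sketch of such a check; the participant data, quota targets, and 80% tolerance are all hypothetical.

```python
import pandas as pd

# Hypothetical research panel and age-group quotas; the goal is to surface
# representation gaps before research sessions begin.
participants = pd.DataFrame({
    "age_group": ["18-29", "30-44", "30-44", "45-64", "65+", "18-29"],
})

targets = {"18-29": 0.25, "30-44": 0.25, "45-64": 0.25, "65+": 0.25}

actual = participants["age_group"].value_counts(normalize=True)
for group, target in targets.items():
    share = actual.get(group, 0.0)
    # Flag a group if it falls below 80% of its target share (assumed tolerance).
    flag = "UNDER-REPRESENTED" if share < target * 0.8 else "ok"
    print(f"{group}: {share:.0%} of panel (target {target:.0%}) -> {flag}")
```

The same check extends to any screener dimension (ethnicity, ability, socioeconomic status) by repeating it per column.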

2. Designing Transparent Interfaces

Users should understand how and why an AI system makes certain decisions. By designing interfaces that explain AI reasoning in clear, user-friendly language, AI products empower users and build trust. Transparency helps mitigate the Eliza effect by clarifying that AI systems have limitations and operate based on specific algorithms and data inputs.
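What this looks like in practice varies by product, but the underlying pattern is to return the "why" and the caveats along with the answer. Here is a small, hypothetical sketch in which a simple linear model's per-feature contributions become plain-language reasoning the interface can show next to the prediction; real systems might use explainability libraries such as SHAP or LIME, but the UX principle is the same.

```python
def explain_prediction(features, weights, feature_names, top_k=2):
    """Pair a prediction with a plain-language 'why' and an honest caveat.

    Hypothetical helper for a simple linear model: each feature's
    contribution is its weight times its value, and the largest
    contributions become the user-facing explanation.
    """
    contributions = [
        (name, w * x) for name, w, x in zip(feature_names, weights, features)
    ]
    contributions.sort(key=lambda pair: abs(pair[1]), reverse=True)
    reasons = [f"{name} ({value:+.2f})" for name, value in contributions[:top_k]]
    return {
        "prediction": sum(value for _, value in contributions),
        "explanation": "Top factors: " + ", ".join(reasons),
        "caveat": "Automated estimate based on limited data; it may be wrong.",
    }

# Example: a made-up loan-risk score surfaced together with its reasoning.
print(explain_prediction(
    features=[0.6, 0.1, 0.9],
    weights=[2.0, -1.5, 0.5],
    feature_names=["debt_ratio", "years_employed", "credit_utilization"],
))
```

Surfacing the caveat alongside the explanation also chips away at the Eliza effect: the interface itself reminds users the output is an estimate, not a judgment.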

An excellent example of transparency in practice is Zendesk, a customer service software company that has implemented transparent AI practices in its customer experience tools. It provides clear explanations of how its AI-powered tools work and how AI decisions are made.

By offering educational resources and documentation, Zendesk helps users understand AI's integration into customer service software. Their approach emphasizes explainability, helping users comprehend the impact of AI on customer interactions.

3. Setting Realistic Expectations

Avoid anthropomorphizing AI systems with human-like avatars or language that suggests consciousness or emotions. Use design elements that set appropriate expectations about the AI's capabilities. This helps prevent users from overestimating what the AI can do, reducing the impact of the Eliza effect.

4. Implementing Feedback Mechanisms

Wherever feasible, incorporate features that allow users to provide feedback on AI outputs. If a user feels that an AI recommendation is off-base, biased, or confusing, there should be an easy way for them to report this feedback. This not only improves the system over time but also encourages users to engage critically with AI outputs rather than accepting them at face value. 

This can start with simple thumbs-up and thumbs-down icons at the end of a response, similar to ChatGPT. But how can this be taken a step further? If the rating is negative, understanding why can reveal whether a greater issue is at play.
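As a sketch of that "step further," the hypothetical data model below requires a structured reason whenever a user gives a thumbs-down, so that fairness complaints can be triaged separately from ordinary quality issues. The field names and reason codes are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical reason codes; "biased" is its own code so fairness reports
# can be routed and reviewed separately from general quality complaints.
REASONS = {"inaccurate", "biased", "confusing", "other"}

@dataclass
class AIFeedback:
    response_id: str
    rating: str                 # "up" or "down"
    reason: str | None = None   # required follow-up when rating == "down"
    comment: str = ""
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def record_feedback(feedback: AIFeedback) -> AIFeedback:
    """Validate feedback before it is persisted for the product/ML team."""
    if feedback.rating == "down" and feedback.reason not in REASONS:
        raise ValueError(f"Negative feedback needs a reason: {sorted(REASONS)}")
    # In a real product this would write to a store the team reviews regularly.
    return feedback
```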

5. Educating Users Within the Experience

Provide educational content that helps users understand how the AI works, including its limitations. Tooltips, tutorials, or accessible documentation can demystify the technology. When users are better informed, they're less likely to attribute undue intelligence to the AI and more likely to use it appropriately.

6. Collaborating Cross-functionally on Ethical AI Development

Work closely with data scientists and engineers to understand how AI models are trained and what data is used. Open dialogue can uncover biases and clarify the AI's actual capabilities, enabling you to design interfaces that accurately represent the technology.

7. Using Tools and Techniques to Detect Bias in AI Models

Use available tools and frameworks designed to detect and reduce bias in AI models. Although these tools are often in the realm of data science, understanding and advocating for their use can make a significant difference in the final user experience. 

Tools like IBM's AI Fairness 360 and Google's What-If Tool assist in evaluating model behavior and fairness. The What-If Tool enables interactive exploration of model predictions, offering counterfactual analysis to see how changes in input data affect outcomes, performance insights to identify influential features, and fairness checks to explore prediction differences across groups. This interactive platform helps identify biases and understand the nuances and limitations of machine learning models in real time.
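As a minimal sketch of what running such a check looks like, the snippet below uses AI Fairness 360 to compute disparate impact on a tiny, made-up hiring dataset; the data and the privileged/unprivileged groupings are illustrative only.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Made-up hiring outcomes: label 1 = advanced to interview.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0, 1, 0],  # 1 = privileged group, 0 = unprivileged
    "label": [1, 0, 1, 1, 0, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact is the ratio of favorable-outcome rates
# (unprivileged / privileged); values near 1.0 suggest parity.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A common rule of thumb in employment contexts (the "four-fifths rule") flags disparate impact below 0.8 as potential adverse impact, though the right threshold depends on the domain.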

Humans play a critical role in bias detection through a method known as "human-in-the-loop." We recently applied this approach with machine learning models to streamline the title insurance process, significantly reducing transaction times and enhancing efficiency.
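The details of that system are specific to the client, but the general human-in-the-loop pattern is straightforward to sketch: automate only the high-confidence cases and route everything else to a person. The threshold and interfaces below are assumptions, not the actual implementation.

```python
CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; tune per use case and risk tolerance

def route(record, model, review_queue):
    """Auto-decide only when the model is confident; otherwise ask a human.

    `model` is any classifier with a scikit-learn-style predict_proba;
    `review_queue` stands in for whatever work queue reviewers use.
    """
    probabilities = model.predict_proba([record])[0]
    confidence = float(probabilities.max())
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": int(probabilities.argmax()),
                "source": "model", "confidence": confidence}
    # Low confidence: a human makes the call, and the reviewed decision can
    # later be fed back in as labeled training data.
    review_queue.append(record)
    return {"decision": None, "source": "human_review", "confidence": confidence}
```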

8. Championing Ethical Guidelines

Advocate for the adoption of ethical AI guidelines within your organization. By establishing clear standards and accountability measures, you create an environment where bias mitigation and user education are shared priorities.

9. Reflecting on Real-World Impacts

Consider the implications of biased AI in products like hiring platforms, loan approval systems, or healthcare diagnostics. A biased algorithm in these contexts doesn't just inconvenience users — it can lead to discrimination and unequal opportunities. Similarly, if users over-rely on AI due to the Eliza effect, they might make poor decisions based on misplaced trust. UX designers have a moral obligation to prevent such outcomes.

Moving Forward Together

Mitigating bias and addressing the Eliza effect in AI isn't a one-time task, but an ongoing commitment. It requires vigilance, empathy, and collaboration. By placing users at the heart of the design process and actively working to identify and address biases and misconceptions, we can create AI products that are innovative as well as fair, transparent, and trustworthy.

Strive to design AI experiences that uplift and empower all users. After all, technology should be an enabler that enhances our human interactions, not one that impedes and divides us.

Jeff Mielke

Design Director

Jeff Mielke serves as 8th Light's Design Director, bringing a wealth of experience from his work with both prominent and emerging brands over the years. His expertise lies in tackling intricate challenges and infusing a user-centric approach into his design practice. His portfolio spans a range of applications, from consumer-oriented to enterprise-level solutions.