This page is part of the Responsible AI series.

  1. Introduction
  2. List of Biases
  3. Bias Metrics

Introduction

Bias and fairness are critical considerations when it comes to artificial intelligence (AI) systems. In the context of AI, bias refers to the systematic errors or deviations from an accurate or fair outcome that can occur in the development and deployment of AI algorithms.

Bias

Bias can manifest in different forms, with statistical bias and ethics bias being two distinct aspects. Statistical bias refers to systematic errors in a model's estimates or predictions, typically introduced by unrepresentative data or flawed modelling choices, whereas ethics bias refers to outcomes that systematically disadvantage certain individuals or groups and stems from the assumptions and values embedded in the system.

Recognizing and addressing both statistical bias and ethics bias are essential for ensuring fairness in AI systems. Mitigating statistical bias requires careful data preprocessing, ensuring representative and diverse training datasets, and employing techniques such as bias correction and fairness-aware learning. Ethics bias, on the other hand, calls for critical reflection, ethical frameworks, and stakeholder engagement throughout the AI development lifecycle to identify and challenge the underlying assumptions and values shaping the system's behavior.

Achieving fairness in AI is a multifaceted endeavor that requires a holistic approach. It involves a combination of technical solutions, regulatory measures, and ethical considerations to promote transparency, accountability, and inclusivity. By addressing statistical bias and ethics bias, we can strive towards AI systems that treat individuals equitably, avoid discrimination, and contribute to a more just and unbiased society.

List of Biases


Cognitive Bias

Inductive Bias

Technical Bias

Omitted Variable Bias

Bias-variance tradeoff


Bias Metrics

The following metrics can be used to detect bias and to constrain a system to be "fair" with respect to them.

Equal performance

Equal performance refers to the assurance that a model is equally accurate for individuals in the protected and non-protected groups. Three types of equal performance are commonly discussed: equal sensitivity (also known as equal opportunity), equal sensitivity and specificity (equalized odds), and equal positive predictive value (predictive parity).

Equalized Odds

A classifier satisfies equalized odds if $P(R = + \mid Y = y, A = a) = P(R = + \mid Y = y, A = b)$ for both $y \in \{+, -\}$ and all groups $a, b$, where $A$ is the sensitive feature, $Y$ is the actual outcome, and $R$ is the binary prediction $\{+, -\}$. In other words, the protected and unprotected groups must have equal true positive rates and equal false positive rates. The concept was originally defined for binary-valued $Y$, but in 2017, Woodworth et al. generalized it to multiple classes.
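
As an illustration, here is a minimal NumPy sketch (the y_true, y_pred, and group arrays are hypothetical toy data) that checks equalized odds by comparing true positive and false positive rates across two groups:

    import numpy as np

    # Hypothetical toy data: 1 = positive, 0 = negative
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
    group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    def tpr_fpr(y_true, y_pred, mask):
        """True positive and false positive rates within one group."""
        yt, yp = y_true[mask], y_pred[mask]
        tpr = (yp[yt == 1] == 1).mean()  # estimate of P(R=+ | Y=+, A=group)
        fpr = (yp[yt == 0] == 1).mean()  # estimate of P(R=+ | Y=-, A=group)
        return tpr, fpr

    tpr_a, fpr_a = tpr_fpr(y_true, y_pred, group == "a")
    tpr_b, fpr_b = tpr_fpr(y_true, y_pred, group == "b")

    # Equalized odds holds when both gaps are (close to) zero
    print("TPR gap:", abs(tpr_a - tpr_b))
    print("FPR gap:", abs(fpr_a - fpr_b))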

Use Cases

Sources

Predictive Parity

A classifier satisfies predictive parity if $P(Y = + \mid R = +, A = a) = P(Y = + \mid R = +, A = b)$ for all groups $a, b$, where $A$ is the sensitive feature, $Y$ is the actual outcome, and $R$ is the binary prediction $\{+, -\}$. Predictive parity ensures that positive predictions have the same precision (positive predictive value) across different groups.
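
A minimal sketch of a predictive parity check on hypothetical toy data, comparing the precision of positive predictions per group:

    import numpy as np

    # Hypothetical toy data: 1 = positive, 0 = negative
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
    group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    def ppv(y_true, y_pred, mask):
        """Precision within one group: estimate of P(Y=+ | R=+, A=group)."""
        yt, yp = y_true[mask], y_pred[mask]
        return (yt[yp == 1] == 1).mean()

    # Predictive parity holds when the two precisions are (close to) equal
    print("PPV group a:", ppv(y_true, y_pred, group == "a"))
    print("PPV group b:", ppv(y_true, y_pred, group == "b"))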

Use Cases

Demographic Parity

Demographic parity or statistical parity (also referred to as acceptance rate parity or benchmarking) refers to the property of a classifier where the subjects in the protected and unprotected groups have equal probability of being assigned to the positive predicted class. This metric considers only the predicted outcome, not the actual outcome: $P(R = + \mid A = a) = P(R = + \mid A = b)$ for all groups $a, b$, where $A$ is the sensitive feature and $R$ is the binary prediction $\{+, -\}$.
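
A minimal sketch of a demographic parity check on hypothetical toy data; only the predicted outcome and the group membership are needed:

    import numpy as np

    # Hypothetical toy data: 1 = positive prediction, 0 = negative prediction
    y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
    group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    # Selection rate per group: estimate of P(R=+ | A=group)
    rate_a = (y_pred[group == "a"] == 1).mean()
    rate_b = (y_pred[group == "b"] == 1).mean()

    # Demographic parity holds when the selection rates are (close to) equal
    print("Selection rates:", rate_a, rate_b, "gap:", abs(rate_a - rate_b))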

Use Cases

Treatment equality

Treatment equality focuses on balancing the ratio of false negatives to false positives across different groups. A classifier satisfies this definition if the subjects in the protected and unprotected groups have an equal ratio of false negatives (FN) to false positives (FP), satisfying the formula $\frac{FN_a}{FP_a} = \frac{FN_b}{FP_b}$ for all groups $a, b$.
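
A minimal sketch of a treatment equality check on hypothetical toy data, comparing the FN/FP ratio per group:

    import numpy as np

    # Hypothetical toy data: 1 = positive, 0 = negative
    y_true = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])
    y_pred = np.array([0, 1, 1, 0, 1, 1, 1, 0, 0, 1])
    group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

    def fn_fp_ratio(y_true, y_pred, mask):
        """Ratio of false negatives to false positives within one group."""
        yt, yp = y_true[mask], y_pred[mask]
        fn = np.sum((yt == 1) & (yp == 0))
        fp = np.sum((yt == 0) & (yp == 1))
        return fn / fp if fp > 0 else float("inf")

    # Treatment equality holds when the two ratios are (close to) equal
    print("FN/FP group a:", fn_fp_ratio(y_true, y_pred, group == "a"))
    print("FN/FP group b:", fn_fp_ratio(y_true, y_pred, group == "b"))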

Use Cases

Mitigation of bias

Bias mitigation in classification

Fairness

Libraries

1. Fairlearn

Fairlearn is an open-source Python library developed by Microsoft. It provides tools for assessing and mitigating unfairness in machine learning models. Fairlearn offers both fairness metrics and algorithms for reducing bias. It also includes visualization tools to help interpret fairness metrics and mitigation results.
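
As a short sketch of how the library can be used (assuming fairlearn and scikit-learn are installed; the synthetic data and sensitive attribute below are hypothetical), fairness is first assessed with MetricFrame and then mitigated with the ExponentiatedGradient reduction under a demographic parity constraint:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference
    from fairlearn.reductions import ExponentiatedGradient, DemographicParity

    # Hypothetical synthetic data with a binary sensitive attribute
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    sensitive = rng.choice(["a", "b"], size=200)
    y = (X[:, 0] + rng.normal(size=200) > 0).astype(int)

    # Assess: selection rate per group and demographic parity difference
    clf = LogisticRegression().fit(X, y)
    y_pred = clf.predict(X)
    mf = MetricFrame(metrics=selection_rate, y_true=y, y_pred=y_pred,
                     sensitive_features=sensitive)
    print(mf.by_group)
    print("DP difference:",
          demographic_parity_difference(y, y_pred, sensitive_features=sensitive))

    # Mitigate: constrain the learner to (approximate) demographic parity
    mitigator = ExponentiatedGradient(LogisticRegression(),
                                      constraints=DemographicParity())
    mitigator.fit(X, y, sensitive_features=sensitive)
    y_mit = mitigator.predict(X)
    print("DP difference after mitigation:",
          demographic_parity_difference(y, y_mit, sensitive_features=sensitive))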

Tools

LLMs

The specific page for RAI in LLMs can be found here: RAI LLM.

Emerging topics

Other readings

To be reviewed

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3547922
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3792772
https://fairware.cs.umass.edu/papers/Verma.pdf
https://www.holisticai.com/blog/holistic-ai-library-tutorial
https://arxiv.org/pdf/1801.07593.pdf