AIGovHub Vendor Tracker

Aequitas

Bias & fairness testing

Chicago, IL · Founded 2018 · 1-10 employees
Ratings

  • Overall: 7.0
  • Ease of Use: 7.5
  • Features: 6.5
  • Value: 9.0
  • Support: 6.0

Overview

Aequitas is an open-source bias and fairness audit toolkit developed by the Center for Data Science and Public Policy at the University of Chicago. Designed primarily for evaluating the fairness of decisions made by algorithmic systems, it provides both a Python library and a web-based interface, making bias auditing accessible to a wide range of users, from data scientists writing code to policy analysts working in a browser.

The toolkit focuses on binary classification and scoring models, the most common types of algorithmic decision-making systems in high-stakes domains such as criminal justice, child welfare, healthcare, and education. Aequitas evaluates model predictions against multiple fairness criteria simultaneously, including statistical parity, false positive rate parity, false negative rate parity, false discovery rate parity, and false omission rate parity, giving a comprehensive view of how a model treats different demographic groups.

One of Aequitas' distinctive features is its Fairness Tree, a decision framework that helps practitioners navigate the often-confusing landscape of fairness definitions and select the most appropriate metrics for their context. The Fairness Tree considers factors such as the nature of the intervention (punitive vs. assistive), the population being assessed, and the availability of resources, guiding users toward the fairness definitions most relevant to their use case.

Aequitas generates clear, visual bias reports designed to be interpretable by non-technical stakeholders, including policymakers, program managers, and oversight bodies. This accessibility reflects the project's origins in public policy research, where communicating findings to diverse audiences is essential.

As an academic project, Aequitas is freely available under an open-source license and has been cited in numerous research papers and policy documents. Note, however, that it is maintained by a university research team rather than a commercial entity, so updates and support are provided on a best-effort basis. The toolkit is well suited to organizations conducting initial bias assessments or working in the public sector, but it may lack the robustness and scalability needed for continuous production monitoring in enterprise environments.
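To make the group metrics concrete, here is a minimal sketch in plain pandas of the quantities this kind of audit reports: per-group selection rate, false positive rate, false negative rate, and false discovery rate, plus each group's disparity relative to a reference group. It illustrates the fairness criteria described above rather than Aequitas' own API; the column names, toy data, reference group, and the tolerance band mentioned in the comments are assumptions made for this example.

```python
import pandas as pd

# Toy audit table: one row per individual, with the model's binary decision
# ("score"), the observed outcome ("label_value"), and a protected attribute.
# Column names, values, and the reference group are illustrative assumptions,
# not an Aequitas schema.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "label_value": [1, 0, 0, 1, 1, 1, 1, 0, 0, 0],
    "race":        ["white", "white", "white", "white", "white",
                    "black", "black", "black", "black", "black"],
})

def group_metrics(g: pd.DataFrame) -> pd.Series:
    """Per-group rates underlying common group-fairness criteria."""
    tp = ((g.score == 1) & (g.label_value == 1)).sum()
    fp = ((g.score == 1) & (g.label_value == 0)).sum()
    fn = ((g.score == 0) & (g.label_value == 1)).sum()
    tn = ((g.score == 0) & (g.label_value == 0)).sum()
    return pd.Series({
        "selection_rate": (tp + fp) / len(g),                   # statistical parity
        "fpr": fp / (fp + tn) if (fp + tn) else float("nan"),   # false positive rate parity
        "fnr": fn / (fn + tp) if (fn + tp) else float("nan"),   # false negative rate parity
        "fdr": fp / (fp + tp) if (fp + tp) else float("nan"),   # false discovery rate parity
    })

metrics = df.groupby("race").apply(group_metrics)

# Disparity = each group's rate divided by the reference group's rate.
# Group-fairness audits typically flag groups whose disparity falls outside a
# tolerance band (for example the "80% rule": 0.8 <= disparity <= 1.25).
reference = "white"  # assumed reference group for this example
disparities = metrics / metrics.loc[reference]

print(metrics.round(2))
print(disparities.round(2))
```

In a real audit the same table would hold one row per scored individual, and disparity ratios like these are what a toolkit such as Aequitas turns into its group-level bias reports and visualizations.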

Frameworks Supported

NIST AI RMF

Compliance & Security

  • SOC 2 Certified
  • ISO 27001 Certified
  • GDPR Compliant
  • DPA Available

Pros

  • Free and open source with both Python library and web-based interface
  • Academic rigor with Fairness Tree framework for selecting appropriate metrics
  • Clear visual bias reports designed for non-technical policymakers
  • Strong track record in public sector and social impact applications

Cons

  • Academic project with best-effort maintenance and limited dedicated support
  • Basic web UI lacking enterprise features like user management and audit trails
  • Focused primarily on binary classification, less suited for complex ML systems

Pricing

  • Pricing model: Free
  • Starting at: Free
  • Free trial/tier: Available

Some links on this page may be affiliate links. This means we may earn a commission if you make a purchase, at no additional cost to you. See our affiliate disclosure. Last verified: February 2026