A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
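Most fairness toolkits are built on a handful of simple group metrics. As a minimal, library-agnostic sketch (the function and variable names are illustrative, not any particular toolkit's API), statistical parity difference compares favorable-outcome rates between groups:

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged).

    A value of 0 indicates parity; thresholds for "fair enough"
    are context-dependent.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)  # 1 = privileged, 0 = unprivileged
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv - rate_priv

# Example: predictions favor the privileged group
y_hat = np.array([1, 0, 1, 1, 0, 0, 1, 0])
grp   = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(statistical_parity_difference(y_hat, grp))  # 0.25 - 0.75 = -0.5
```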
WEFE: The Word Embeddings Fairness Evaluation Framework. WEFE standardizes bias measurement and mitigation in word embedding models. Please feel welcome to open an issue if you have any questions, or a pull request if you want to contribute to the project!
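WEFE standardizes metrics such as WEAT. A minimal numpy sketch of the WEAT test statistic using plain cosine similarities (not WEFE's actual API; the toy vectors below are random placeholders for real embeddings):

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_score(T1, T2, A1, A2):
    """WEAT test statistic: sum over the first target set of the
    difference in mean association with attribute sets A1 vs. A2,
    minus the same sum over the second target set."""
    def s(w):
        return np.mean([cosine(w, a) for a in A1]) - \
               np.mean([cosine(w, a) for a in A2])
    return sum(s(w) for w in T1) - sum(s(w) for w in T2)

# Toy 3-d "embeddings"; in practice these come from a trained model.
rng = np.random.default_rng(0)
career, family = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
male, female = rng.normal(size=(2, 3)), rng.normal(size=(2, 3))
print(weat_score(male, female, career, family))
```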
Toolkit for auditing bias and fairness of machine learning systems and for mitigating detected bias 🔎🤖🧰
NeurIPS 2019 paper: RUBi: Reducing Unimodal Biases for Visual Question Answering
[ICML 2022] Channel Importance Matters in Few-shot Image Classification
Estimation and inference from generalized linear models using explicit and implicit methods for bias reduction
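The best-known explicit correction for logistic regression is Firth's adjusted score, solvable by Newton steps on hat-value-adjusted responses. A minimal numpy sketch of the binary-logistic case (an illustration of the method, not the package's own code, which is in R and covers general GLM families):

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth's bias-reduced logistic regression via Newton steps on
    the adjusted score U*(b) = X'(y - p + h*(0.5 - p)),
    where h are the hat values of the weighted design."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                        # diagonal of the weight matrix
        XtWX = X.T @ (W[:, None] * X)          # Fisher information
        XtWX_inv = np.linalg.inv(XtWX)
        # hat values h_i of W^(1/2) X (X'WX)^(-1) X' W^(1/2)
        h = W * np.einsum('ij,jk,ik->i', X, XtWX_inv, X)
        score = X.T @ (y - p + h * (0.5 - p))  # Firth-adjusted score
        step = XtWX_inv @ score
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Finite estimates even under complete separation, where plain MLE diverges.
X = np.column_stack([np.ones(6), [-2., -1., -0.5, 0.5, 1., 2.]])
y = np.array([0, 0, 0, 1, 1, 1])
print(firth_logistic(X, y))
```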
Methods for M-estimation of statistical models
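M-estimation replaces the least-squares criterion with a robust loss and solves the resulting estimating equations, usually by iterative reweighting. A minimal sketch of a Huber M-estimator of location (a generic illustration, not this package's interface):

```python
import numpy as np

def huber_location(x, c=1.345, n_iter=100, tol=1e-9):
    """Huber M-estimate of location: weights w(r) = min(1, c/|r|)
    on scaled residuals, iterated to a fixed point."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745  # MAD-based scale
    for _ in range(n_iter):
        r = (x - mu) / scale
        w = np.minimum(1.0, c / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

data = np.array([0.8, 1.1, 0.9, 1.0, 1.2, 15.0])  # one gross outlier
print(np.mean(data), huber_location(data))  # mean is dragged up; M-estimate is not
```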
This repository contains the experiments conducted in the ICLR 2022 spotlight paper "On the Importance of Firth Bias Reduction in Few-Shot Classification".
Bias reduction in quasi-likelihood estimation
Bias correction command-line tool for climatic research written in C++
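A standard bias-correction technique in climate research is empirical quantile mapping: a simulated value is passed through the model's CDF and then through the observed quantile function. A minimal sketch, in Python for consistency with the other examples (the tool itself is C++ and may implement additional methods):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: x -> F_obs^{-1}(F_model(x)).

    model_hist:   simulated values over the calibration period
    obs_hist:     observed values over the same period
    model_future: values to correct
    """
    model_sorted = np.sort(model_hist)
    # Empirical CDF position of each value within the model climate
    ranks = np.searchsorted(model_sorted, model_future, side='right')
    probs = ranks / (len(model_sorted) + 1)
    # Map those probabilities onto the observed distribution
    return np.quantile(obs_hist, probs)

# Toy example: the model runs about 2 degrees too cold
obs = np.random.default_rng(1).normal(15.0, 3.0, 1000)
sim = obs - 2.0 + np.random.default_rng(2).normal(0, 0.5, 1000)
corrected = quantile_map(sim, obs, sim)
print(sim.mean(), corrected.mean(), obs.mean())
```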
Bias detection toolkit: Chrome extension, Python package, and documentation of state-of-the-art research papers.
TensorFlow implementation of Learning Not to Learn (CVPR 2019)
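Learning Not to Learn trains a bias-prediction branch adversarially against the feature extractor so the features stop encoding the known bias. A common building block for this kind of adversarial unlearning is a gradient reversal layer; a minimal PyTorch sketch (illustrative, not the repository's TensorFlow code):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda
    on the backward pass, so the feature extractor is pushed to
    *hurt* the bias predictor sitting on top of it."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: features -> (task head) and (grad_reverse -> bias head)
feats = torch.randn(4, 8, requires_grad=True)
bias_head = torch.nn.Linear(8, 2)
bias_logits = bias_head(grad_reverse(feats, lam=0.5))
bias_logits.sum().backward()  # feats.grad now carries the reversed signal
```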
This repository contains the firth bias reduction experiments on the few-shot distribution calibration method conducted in the ICLR 2022 spotlight paper "On the Importance of Firth Bias Reduction in Few-Shot Classification".
PyTorch implementation of 'Explaining text classifiers with counterfactual representations' (Lemberger & Saillenfest, 2024), ECAI 2024 (27th European Conference on Artificial Intelligence)
Sampling algorithms and machine learning models to reduce bias and predict credit risk.
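Class imbalance is a common source of bias in credit-risk data, and the simplest sampling remedy is random oversampling of the minority class. A minimal sketch (illustrative; the repository may use more elaborate schemes such as SMOTE):

```python
import numpy as np

def random_oversample(X, y, rng=None):
    """Duplicate minority-class rows until all classes are the same size."""
    rng = rng or np.random.default_rng()
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c, n in zip(classes, counts):
        members = np.flatnonzero(y == c)
        extra = rng.choice(members, size=n_max - n, replace=True)
        idx.append(np.concatenate([members, extra]))
    idx = rng.permutation(np.concatenate(idx))
    return X[idx], y[idx]

X = np.arange(10).reshape(5, 2)
y = np.array([0, 0, 0, 0, 1])   # 4-to-1 imbalance
Xb, yb = random_oversample(X, y)
print(np.bincount(yb))          # [4 4]
```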
A small, simple prototype that alerts users to the bias of a news source.
Location-adjusted Wald statistics
Critical questions to help you gain useful information, clarify the context, figure out the pain points, and overcome biases.
Unbiased toxicity detection from comments
A method to preprocess the training data, producing an adjusted dataset that is independent of the group variable with minimum information loss.
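One way to achieve such group independence with little information loss is per-group quantile alignment: each group's feature values are mapped through the pooled quantile function while preserving within-group rank order. A minimal one-feature sketch in the spirit of this approach (a generic illustration, not necessarily this repository's exact method):

```python
import numpy as np

def align_quantiles(x, group):
    """Map each group's values through its own empirical CDF, then
    through the pooled quantile function. Ranks within each group
    are preserved, but the group is no longer predictable from x."""
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    for g in np.unique(group):
        mask = group == g
        vals = x[mask]
        # Rank within group -> probability in (0, 1)
        ranks = vals.argsort().argsort()
        probs = (ranks + 1) / (mask.sum() + 1)
        # Probability -> value on the pooled distribution
        out[mask] = np.quantile(x, probs)
    return out

x = np.array([1., 2., 3., 11., 12., 13.])  # group 1 shifted by +10
g = np.array([0, 0, 0, 1, 1, 1])
print(align_quantiles(x, g))               # both groups now span the pooled range
```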