ACM FAT* Tutorial: Disambiguating Bias and Unfairness in Algorithmic Products

At the ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*) later this month, I will be presenting a tutorial on why it is philosophically and organizationally important to distinguish carefully between algorithmic bias and algorithmic unfairness. Unfortunately, in English the term “bias” has multiple meanings that are in direct conflict when discussing the ethics of algorithms. In a liberal democratic context, “bias” carries a negative connotation that is fundamentally an implicit claim about the unfairness of pre-judgment. In statistical usage, however, “bias” occurs when a model diverges systematically from reality by a large enough margin to interfere with the model's utility. In other words, statistical bias is a feature of technical systems, whereas unfairness is a feature of human systems.
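To make the statistical sense of "bias" concrete, here is a minimal, self-contained illustration (my own sketch, not material from the tutorial) using the textbook example of a biased estimator: dividing the sum of squared deviations by n systematically underestimates the true variance, while dividing by n−1 (Bessel's correction) does not.

```python
import random

random.seed(1)

def var_biased(xs):
    """Sample variance dividing by n: systematically underestimates."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def var_unbiased(xs):
    """Sample variance with Bessel's correction (divide by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Draw many small samples from a distribution whose true variance is 1.0.
trials = [[random.gauss(0, 1) for _ in range(5)] for _ in range(20000)]

avg_biased = sum(var_biased(t) for t in trials) / len(trials)
avg_unbiased = sum(var_unbiased(t) for t in trials) / len(trials)

# On average, the biased estimator lands near (n-1)/n = 0.8 of the true
# variance; the corrected estimator lands near the true value of 1.0.
```

The point of the example is the definition itself: the biased estimator's error is not random noise but a systematic divergence from reality, with no ethical content at all.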

While terminological precision has value in its own right, this confusion has real consequences for organizations that build and deploy algorithmic decision tools. With the proliferation of tools for diagnosing algorithmic bias, an organization can invest in addressing bias without ever building the capacity to grapple with the harder question of how it conceives of fairness. Furthermore, it turns out that quite a few algorithmic fairness problems require introducing statistical bias (or, at the very least, accepting a trade-off with accuracy). Because fairness questions cannot be resolved simply by unbiasing datasets, organizations that build and deploy algorithmic tools need the capacity to address both types of problems, yet investment in the capacity to deliberate about fairness is lagging behind.
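The fairness–accuracy tension can be sketched in a few lines. The example below is my own illustration on synthetic data (all names and numbers are hypothetical, not from the tutorial): two groups have different underlying base rates, so a single accuracy-oriented score threshold selects them at very different rates; enforcing equal selection rates (one common formalization, demographic parity) requires per-group thresholds that move away from the accuracy-optimal decision rule.

```python
import random

random.seed(0)

def make_group(n, base_rate):
    """Synthetic individuals: a true label plus a noisy score correlated with it."""
    people = []
    for _ in range(n):
        label = 1 if random.random() < base_rate else 0
        score = random.gauss(label, 0.5)  # noisy signal of the true label
        people.append((score, label))
    return people

group_a = make_group(5000, 0.6)  # hypothetical group with a higher base rate
group_b = make_group(5000, 0.3)  # hypothetical group with a lower base rate

def accuracy(people, threshold):
    return sum((s > threshold) == (y == 1) for s, y in people) / len(people)

def selection_rate(people, threshold):
    return sum(s > threshold for s, y in people) / len(people)

# 1. One shared, accuracy-oriented threshold: selection rates differ by group.
t = 0.5
acc_single = (accuracy(group_a, t) + accuracy(group_b, t)) / 2
rates_single = (selection_rate(group_a, t), selection_rate(group_b, t))

# 2. Enforce demographic parity: per-group thresholds at the same quantile,
#    so both groups are selected at the same rate.
target_rate = selection_rate(group_a + group_b, t)  # keep total selections fixed

def threshold_for_rate(people, rate):
    """Threshold that selects roughly the top `rate` fraction of a group."""
    scores = sorted((s for s, _ in people), reverse=True)
    return scores[int(rate * len(scores))]

t_a = threshold_for_rate(group_a, target_rate)
t_b = threshold_for_rate(group_b, target_rate)
acc_parity = (accuracy(group_a, t_a) + accuracy(group_b, t_b)) / 2
rates_parity = (selection_rate(group_a, t_a), selection_rate(group_b, t_b))

# The parity thresholds equalize selection rates, but accuracy drops:
# each group's decision rule has been deliberately pushed away from the
# rule that best tracks the (synthetic) ground truth.
```

The design choice here is the point: equalizing outcomes across the two groups is achieved precisely by making the classifier a worse predictor of the synthetic ground truth, which is why "unbiasing" alone cannot settle the fairness question.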

I will be posting more on this topic here and on Medium in the near future; please check back.

The accepted abstract for my FAT* 2019 tutorial: fat2019tutorials-paper26
