Data-driven decision-making techniques are successfully implemented in applications like hiring, assessing recidivism risks, loans, display advertising, and more. On the negative side, some of these methods have been shown to discriminate against disadvantaged minorities. Consequently, scientists have developed ways to enforce fairness constraints that should potentially reduce this discrimination. In this explainer, we demonstrate how enforcing fairness constraints could ironically make the disadvantaged group worse off.
We first present the issue of algorithmic unfairness and why it may occur. We consider all the stakeholders of the algorithm and describe why and how it can be unfair. Then, we take an economic point of view and illustrate that fairness constraints can adversely affect the group they try to benefit. Finally, we discuss the critical role of policy-makers in devising fairer algorithms, in both the public and private sectors.
Introduction
Algorithmic decision-making is gaining massive popularity and plays a fundamental role in many facets of our lives. Criminal justice, health, online advertising, banking, job hiring, and college admission are just a few examples. Such applications rely on vast amounts of collected data, which are processed by state-of-the-art machine learning algorithms. Since these algorithms can discover hidden patterns and structures in data, we expect them to outperform human decision-makers. However, with the abundance of applications in which algorithms operate, concerns about their ethics, fairness, and privacy have emerged.
How Can an Algorithm Be Unfair?
At first glance, algorithms may seem free of human biases such as sexism or racism. However, in many situations, they are not. For instance, Amazon's hiring algorithm was found to discriminate against women, and the nonprofit organization ProPublica claimed that the COMPAS recidivism algorithm discriminates based on race. Algorithmic discrimination has many causes, including biases in the dataset used for learning the decision rule, biases caused by missing data, biases introduced by the algorithm's objective, and more.
In order to provide an intuitive explanation of why unfairness occurs, we first must understand the problem a given decision-making algorithm is trying to solve. In the lending scenario, there is a bank that aims to maximize its revenue. It gets revenue for every successful loan, namely a loan that the borrower will fully pay back, and suffers a cost for every unsuccessful loan. The bank uses historical information to learn the best decision rule: Given individual attributes, should the bank approve the loan?[1] From the bank's standpoint, the best decision rule is the one that maximizes its revenue.
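To make the bank's objective concrete, here is a minimal sketch in Python. The gain and loss figures and the approve_loan helper are hypothetical, invented purely for illustration; the point is only that a revenue-maximizing rule boils down to approving applicants whose estimated repayment probability clears a threshold.

```python
def approve_loan(repayment_prob: float, gain: float = 1_000.0, loss: float = 5_000.0) -> bool:
    """Approve the loan if its expected profit to the bank is positive.

    `repayment_prob` is the bank's estimate, learned from historical data, that the
    applicant will fully pay the loan back; `gain` and `loss` are hypothetical
    revenue from a repaid loan and cost of a default.
    """
    expected_profit = repayment_prob * gain - (1 - repayment_prob) * loss
    return expected_profit > 0


# With these numbers, the bank approves only applicants whose estimated
# repayment probability exceeds loss / (gain + loss) ≈ 0.83.
print(approve_loan(0.90))  # True
print(approve_loan(0.70))  # False
```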
Analyzing the bank's decision rule, we might discover several undesired effects. For the analysis, we divide the population of potential borrowers into two sub-populations, often by a protected attribute like gender or race, and analyze each sub-population separately. One undesired effect could be that an individual received a loan while another individual with identical attributes from the other sub-population did not. Another is a wide gap between the percentage of individuals receiving loans in one sub-population compared to the other. There could also be a difference in the rate of false positives in each sub-population, i.e., individuals who were predicted to be creditworthy but did not return the loan, indicating that the bank is willing to take more risk on one sub-population.
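For concreteness, the following sketch computes two of these measures on made-up data. The group labels, decisions, and outcomes below are invented; the snippet only shows what "a gap in loan rates" and "a gap in false positives" mean in practice.

```python
import numpy as np

# Invented toy data: group membership, the bank's decision, and the eventual outcome.
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
approved = np.array([1, 1, 1, 0, 1, 0, 0, 0])
repaid = np.array([1, 0, 1, 1, 1, 0, 1, 0])

for g in ("A", "B"):
    members = group == g
    loan_rate = approved[members].mean()
    # False positives: approved applicants who did not repay, relative to all
    # members of the group who would not have repaid.
    would_not_repay = members & (repaid == 0)
    fp_rate = (would_not_repay & (approved == 1)).sum() / would_not_repay.sum()
    print(f"group {g}: loan rate = {loan_rate:.2f}, false positive rate = {fp_rate:.2f}")
```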
A natural way to describe unfairness is to define a fairness measure, such as those described above, and show a discrepancy (namely, discrimination) between the groups.[2] The sub-population that suffers from this discrepancy is called the disadvantaged group, and the other sub-population is called the advantaged group. It is important to note that group membership (e.g., race) is not necessarily recorded in the bank's database, but is correlated with other attributes such as home address, income, etc. Group membership is a tool that analysts use to look for discrimination.
What is a Fair Algorithm?
To remedy possible discrimination, computer scientists have proposed various fairness measures and accompanying algorithms. The revised algorithms sacrifice some predictive accuracy in order to achieve fairer treatment of sub-populations or individuals. For instance, a modified version of the bank's lending algorithm will be less accurate and give more loans to uncreditworthy individuals. Still, it will equalize (or almost equalize) the fairness measure between the two groups. That is, the algorithm will grant more loans to people who it predicts are less likely to return the loan, but it will, in a sense, treat the sub-populations similarly.
How does imposing fairness constraints affect the stakeholders of the decision-making algorithm (the bank and the potential borrowers)? To achieve greater fairness, the bank will have to be less profitable, at least in the short term. We also expect the bank to give more loans to the disadvantaged group and potentially fewer loans to the advantaged group, which will balance the fairness measure between the two groups. Crucially, the bank is a strategic entity, so even if we enforce fairness constraints, we still expect it to maximize its revenue given the new regulations.
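To give a sense of how such a constrained decision rule can look, here is a rough sketch in Python. The score distributions, profit figures, and the demographic-parity-style constraint (approval rates must roughly match) are our own illustrative choices, not a description of any particular bank's or paper's method; the sketch only shows that the bank keeps maximizing revenue, now over the decision rules the regulation still allows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimated repayment probabilities for the two sub-populations;
# the advantaged group is assumed to score higher on average.
scores = {
    "advantaged": rng.beta(5, 2, 1_000),
    "disadvantaged": rng.beta(2, 2, 1_000),
}
GAIN, LOSS = 1_000.0, 5_000.0  # bank's gain on repayment / loss on default


def profit(probs: np.ndarray, threshold: float) -> float:
    """Expected revenue if the bank approves everyone scored at or above the threshold."""
    approved = probs[probs >= threshold]
    return float((approved * GAIN - (1 - approved) * LOSS).sum())


grid = np.linspace(0, 1, 101)

# Unconstrained: one profit-maximizing threshold applied to both groups.
unconstrained = max(grid, key=lambda t: profit(scores["advantaged"], t) + profit(scores["disadvantaged"], t))

# With a demographic-parity-style constraint: the bank still maximizes profit,
# but only over pairs of group thresholds whose approval rates (roughly) match.
best_pair, best_value = None, -np.inf
for t_adv in grid:
    for t_dis in grid:
        rate_gap = abs((scores["advantaged"] >= t_adv).mean() - (scores["disadvantaged"] >= t_dis).mean())
        if rate_gap <= 0.01:
            value = profit(scores["advantaged"], t_adv) + profit(scores["disadvantaged"], t_dis)
            if value > best_value:
                best_pair, best_value = (t_adv, t_dis), value

print("unconstrained threshold:", unconstrained)
print("constrained thresholds (advantaged, disadvantaged):", best_pair)
```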
Can Fairness Harm the Disadvantaged Group?
Ironically, imposing fairness constraints to help the disadvantaged group may not always be to its advantage. To describe possible harms, we need to consider the population's social welfare (or simply, welfare), which is the sum of utilities of all individuals. Any fairness constraint that ends up decreasing the welfare of the disadvantaged group harms this group – the individuals in the group will be, by definition, worse off after imposing the fairness constraint.
A key observation is that equalizing a fairness measure between the two groups is equivalent to selecting a welfare function and equalizing it. For instance, assume that the utility of every individual who receives the loan is 1 and 0 otherwise. In such a case, equalizing the groups' per-capita welfare (the sum of utilities divided by the group's size) is precisely equalizing the proportion of loans received in each group, a fairness measure we mentioned before. Consequently, imposing a fairness constraint leads to increased assumed welfare of the disadvantaged group. We emphasize the word "assumed" because this welfare is associated with the regulator's fairness measure.
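For readers who want the equivalence spelled out, here it is in symbols. The notation (u_i for individual utilities, G for a group) is ours, and welfare is read per capita so that groups of different sizes can be compared.

```latex
% Notation introduced for this sketch (not from the text above):
% u_i = 1 if individual i receives a loan and 0 otherwise; G is one of the two groups.
\[
  \underbrace{\frac{1}{|G|}\sum_{i \in G} u_i}_{\text{assumed per-capita welfare of } G}
  \;=\;
  \underbrace{\frac{|\{\, i \in G : u_i = 1 \,\}|}{|G|}}_{\text{proportion of loans received in } G}
\]
% Requiring the left-hand side to match across the two groups is therefore the same
% constraint as requiring the loan proportions to match.
```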
But the assumed welfare can be very different from the underlying welfare that represents individuals' actual tastes and preferences. The underlying welfare is what the regulator ultimately cares about; the assumed welfare is just a proxy. This proxy has a fundamental flaw: Notice that the fairness measures we described before are all expressed in statistical language, for instance, equalizing the proportion of loans or the proportion of false positives. However, the underlying welfare can be much more complex. For example, potential borrowers may differ in their taste for getting the loan depending on their access to alternative sources of money or their disutility if they do not pay the loan back.
Naturally, a slight discrepancy between underlying and assumed welfares is innocuous. However, a severe mismatch can decrease the (underlying) welfare of the disadvantaged group [1]. At the same time, the regulator, who considers the assumed welfare and not the underlying welfare, will proudly state that it supports the disadvantaged group while actually harming it!
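A toy calculation makes the mechanism concrete. The numbers below are invented for illustration and are not taken from [1]: the regulator counts every granted loan as one unit of welfare, while the borrowers themselves gain from a loan they can repay but lose (debt, fees) from one they cannot.

```python
# Invented utilities for a small disadvantaged group of six potential borrowers.
# Assumed utility (the regulator's proxy): 1 for any granted loan, 0 otherwise.
# Underlying utility: +1 for a loan that is repaid, -2 for a loan that ends in
# default, 0 for no loan.

loans_to_likely_repayers = 3    # loans the bank grants even without the constraint
loans_to_likely_defaulters = 3  # extra loans granted only to satisfy the constraint

assumed_welfare = loans_to_likely_repayers + loans_to_likely_defaulters              # 6, up from 3
underlying_welfare = loans_to_likely_repayers * 1 + loans_to_likely_defaulters * -2  # -3, down from 3

print(assumed_welfare, underlying_welfare)
```

By the regulator's proxy, the group looks twice as well off as before; by its own preferences, it ends up worse off than with no intervention at all.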
Another possible harm stems from the delayed impact of fairness constraints. Fairness constraints should ideally improve the aggregate creditworthiness of the disadvantaged group. However, lending to uncreditworthy individuals, an inevitable effect of imposing fairness constraints, can lead to stagnation or decline in the aggregate creditworthiness of the disadvantaged group [2].
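The following sketch, with invented score dynamics (the model analyzed in [2] is richer), shows why: repayment nudges a borrower's credit score up, default pushes it down harder, so the expected change is negative for sufficiently risky borrowers, and extra loans to them drag the group's average creditworthiness down.

```python
SCORE_UP, SCORE_DOWN = 10, 40  # hypothetical score change after repayment / default

def expected_score_change(repayment_prob: float) -> float:
    """Expected credit-score change of a borrower who is granted a loan."""
    return repayment_prob * SCORE_UP - (1 - repayment_prob) * SCORE_DOWN

print(expected_score_change(0.9))  #  5.0: a safe borrower's score tends to improve
print(expected_score_change(0.6))  # -10.0: a risky borrower's score tends to decline
```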
How to Avoid Harmful Fairness
The "Ban the Box" policy, adopted by several US states, prevents employers from conducting criminal background checks until late in the job application process. The goals of the policy are improving employment outcomes for those with criminal records and reducing racial disparities in employment. However, as Doleac and Hansen show, in the absence of an applicant's criminal history, employers statistically discriminate against demographic groups that include more ex-offenders.
We can learn a lot from the "Ban the Box" policy and its implications. Indeed, the ultimate goal of fairness constraints in machine learning is to promote disadvantaged groups. However, in many cases, fairness constraints are defined in a statistical language by computer scientists, who have been unwittingly cast as partial social planners [3]. The regulator's intervention should aim for welfare maximization, which is complex and elusive, and not necessarily what seems fair at first glance. Making disadvantaged groups and individuals better off is not a computational problem, so we should not expect a computational solution. Large-scale field experiments, collaborations between computer scientists and economists, periodic checks and tracking, and more are required to ensure that data-driven algorithms benefit disadvantaged minorities and do not harm them.
Further Reading
[1] Ben-Porat, O., Sandomirskiy, F., & Tennenholtz, M. 2021. “Protecting the Protected Group: Circumventing Harmful Fairness.” Proceedings of the AAAI Conference on Artificial Intelligence, 35(6), 5176-5184. (link)
[2] Liu, L., Dean, S., Rolf, E., Simchowitz, M., & Hardt, M. 2018. “Delayed Impact of Fair Machine Learning.” International Conference on Machine Learning, 3156–3164. (link)
[3] Hu, L., & Chen, Y. 2020. “Fair Classification and Social Welfare.” Conference on Fairness, Accountability, and Transparency (FAT* ’20). (link)
[4] Pessach, D., & Shmueli, E. 2020. “Algorithmic Fairness.” arXiv preprint arXiv:2001.09784. (link)
[5] Chouldechova, A., & Roth, A. 2018. “The Frontiers of Fairness in Machine Learning.” arXiv preprint arXiv:1810.08810. (link)
[1] This is a simplified description of the problem banks face, but one that accurately describes the foundation of unfairness. We use this abstraction since the explainer is aimed at a wide audience.
[2] The way we measure fairness obviously determines whether an algorithm is fair or not. It is, however, important to agree on how to measure it. For a survey of some of these measures, see [4].
The opinions expressed in this text are solely those of the author(s) and do not necessarily reflect the views of the Heinrich Böll Stiftung Tel Aviv and/or its partners.