What is Algorithmic Bias?

This post is part 1 of the KITE blog series on algorithmic fairness.

Blog post by Otto Sahlgren

The writer is a Doctoral Researcher in Philosophy at Tampere University. His research focuses on the ethics of AI, the philosophy of discrimination, and algorithmic fairness.

For anyone who has kept an eye on the media discussion on artificial intelligence (hereafter ‘AI’) and machine learning (‘ML’), it is no news that these systems can be biased and unfair, and that their use can lead to unlawful discrimination and other adverse impacts. While AI can certainly improve efficiency and consistency, optimize processes and complement human decision-making, the narrative of AI as a quick fix for human bias in decision-making is quickly losing its footing.

This blog post is the first part in a series considering algorithmic fairness from both a technical and a philosophical perspective. It is meant to serve industry practitioners as well as those with no technical background but with an interest in ethics. Part 1 considers a central question underlying discussions on fair AI: what is algorithmic bias?

Key take-aways:

  • Algorithmic bias is ubiquitous across AI applications, from recommender systems to autonomous vehicles.
  • Bias can be understood in at least three different ways, each of which relates to questions of fair and equal treatment of individuals in the use of AI systems.

 

Fairness as a key question in AI

This century has witnessed a re-emergence of AI hype, accompanied by promises of more objective and impartial decision-making processes with the help of automated data analysis. Algorithms never tire, get hungry, or give in to emotion or prejudice, the promise goes, making them seemingly ideal for decision-making purposes. The same cannot be said about us imperfect humans. This appearance of algorithmic objectivity is misleading, however. An algorithm is objective in the same sense a recipe for apple pie is objective: it provides consistency in action. But even consistent practices may be unethical or unjust (or result in bad apple pie).

Fairness has been identified as a key issue in the ethics of AI due to increased awareness of so-called algorithmic bias. Fairness can be characterized, roughly, as making decisions in a non-discriminatory and just way while recognizing legitimate differences between individuals, such as their merits and needs. Bias in AI systems may result in violations of principles of fairness. As we will see below, however, bias is deeply woven into the fabric of AI. Indeed, the question of what algorithmic bias is cannot be considered separately from the question of what constitutes fairness.

 

What is algorithmic bias?

Defining algorithmic bias is no easy task. Some examples might help illuminate the concept:

  • In natural language processing, word embeddings can exhibit gender bias, such as gendered stereotypes and patriarchal notions of gender roles (Bolukbasi et al. 2016; Caliskan et al. 2017);
  • Search engines have been shown to amplify racial biases and stereotypes, being more likely to return derogatory content when keywords include terms such as ‘black girls’ (Noble 2018);
  • The COMPAS algorithm used by U.S. judges in predicting whether defendants will go on to re-offend (or, more specifically, whether they will get re-arrested by the police) overestimates recidivism risk for black defendants and underestimates it for white defendants (Angwin et al. 2016);
  • Commercial facial recognition algorithms fail to recognize black persons – black women, in particular – at higher rates in comparison to white persons (Buolamwini & Gebru 2018).

These examples merely scratch the surface. The lesson learned from a thriving field of research is that bias is not a bug but a feature (to paraphrase Kate Crawford); it is part of the normative core of AI technology.

There are many ways in which AI can be laden with prejudice and exhibit discriminatory patterns. Some biases are subtle, others more severe. Bias can result from skews in the training examples used to teach AI systems or from mis-specified target variables, among other things. Some uses of AI are biased through and through (think of mass-scale biometric recognition technologies used for genocidal purposes, for example). In the end, fairness is a question of what (and whose) values are embedded in the technology, who is excluded as a result, and whether this is justified in the eyes of the law and in light of moral principles.

Conceptual challenges notwithstanding, it is perhaps useful to distinguish at least three senses in which one might understand the term “algorithmic bias”. This threefold distinction comes from work by Danks & London (2017).

Statistical Bias

One might, first, conceive of algorithmic bias in terms of statistical representation. This is the sense AI practitioners are perhaps most familiar with. When an algorithm is trained on unrepresentative or inaccurate data, the result may be higher error rates (e.g., misclassification and misprediction rates) or otherwise worse outcomes when the system is used on members of underrepresented groups. In other words, the model learned by the algorithm does not capture the true population values. Call this statistical bias.

Statistical bias is a result of under- and overrepresentation. For instance, ‘Labeled Faces in the Wild’, a dataset widely used for training and benchmarking image recognition systems, contains over 15,000 images of faces, only 7% of which depict people of color (e.g., Han & Jain 2014). Underrepresentation in training data may lead to higher error rates because the algorithm does not learn equally well, across different groups, which features are predictive of a given outcome.
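
To make this concrete, below is a minimal sketch of how one might check for this effect: train a model on data in which one group is heavily underrepresented, then compare misclassification rates per group. The column names and data are invented for illustration and are not drawn from any of the studies cited here.

```python
# Illustrative sketch only: comparing error rates across groups when one
# group is heavily underrepresented in the training data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_a, n_b = 950, 50  # group "B" is heavily underrepresented

df = pd.DataFrame({
    "x1": rng.normal(size=n_a + n_b),
    "x2": rng.normal(size=n_a + n_b),
    "group": ["A"] * n_a + ["B"] * n_b,
})
# The feature-outcome relationship differs by group, so a model fitted
# mostly on group A's examples generalizes poorly to group B.
signal = np.where(df["group"] == "A", df["x1"], -df["x1"])
df["label"] = (signal + 0.5 * rng.normal(size=len(df)) > 0).astype(int)

train, test = train_test_split(df, test_size=0.3, random_state=0, stratify=df["group"])
model = LogisticRegression().fit(train[["x1", "x2"]], train["label"])
test = test.assign(pred=model.predict(test[["x1", "x2"]]))

# Misclassification rate per group: expect a much higher rate for group "B".
for name, g in test.groupby("group"):
    print(name, round(float((g["pred"] != g["label"]).mean()), 2))
```

In real settings, the relevant groups, features and outcomes come from the application domain, and differences in error rates are only one symptom of statistical bias.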

The issue of overrepresentation, on the other hand, is pertinent in the case of COMPAS (see above): the algorithm’s tendency to overestimate re-arrest risk for black defendants was partly a result of the relatively greater representation of black persons in the arrest data.

Now, the reader should note that disparities in data representation may accurately capture population values, but they may also result from biased or otherwise faulty data collection practices. For example, the disparities in U.S. arrest data representation are not meaningfully understood without accounting for structural inequality (e.g., poverty, unequal access to education etc.) and bias in policing practices (e.g., disproportionate policing of persons of color).

Human decisions determine what data is captured and how, and humans also interpret what the data stands for. The take-away here is that when it comes to complex social phenomena, designers should be wary of taking data as unquestionable ground truth.

Legal Bias

One might also understand bias as a property of algorithms (or their use) which involves a violation of legal norms prohibiting discrimination. Call this legal bias.

Take the case of the credit institution Svea Ekonomi (detailed here). In 2018, Svea Ekonomi was found to have engaged in multiple forms of discrimination prohibited under the Non-Discrimination Act. The company used statistical methods for credit-scoring decisions, and its data showed that, statistically speaking, men had more payment defaults than women. Similarly, native Finnish speakers defaulted more often than native Swedish speakers. As a result, the system produced a lower credit score when an applicant was male and when the applicant’s native tongue was Finnish. Section 8 of the Non-Discrimination Act and Section 8 of the Act on Equality Between Men and Women prohibit discrimination on the basis of such factors.

The problem with data mining and ML is that unfair treatment (perhaps even unlawful discrimination) may not require the explicit use of suspect variables such as ‘gender’. Bias may creep in through the use of proxies for such attributes – sometimes even unbeknownst to the model developers. For example, COMPAS does not use attributes such as ‘race’ in generating risk scores for defendants. Yet seemingly benign variables such as ZIP codes may be predictive of an individual’s ‘race’ or socio-economic status (see Barocas & Selbst 2016), resulting in differences in output distributions that follow those lines.
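
One simple diagnostic, sketched below under purely hypothetical assumptions (the variable names and data are invented), is to check how well a seemingly neutral feature predicts a protected attribute that was deliberately left out of the model. If the attribute can be recovered from the feature with high accuracy, the feature can act as a proxy for it.

```python
# Illustrative sketch only: a seemingly neutral feature acting as a proxy
# for a protected attribute. All data here is synthetic and hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000

# Protected attribute (never handed to the credit model itself).
protected = rng.integers(0, 2, size=n)

# A residential-area code that is strongly, though not perfectly, associated
# with the protected attribute (think of residential segregation).
area_code = np.where(rng.random(n) < 0.85, protected, 1 - protected)

X = pd.DataFrame({"area_code": area_code})

# Diagnostic: accuracy well above 0.5 signals that the 'neutral' feature
# carries information about the protected attribute.
score = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()
print(f"Protected attribute recoverable from area code alone: {score:.2f} accuracy")
```

A check like this does not settle whether using the feature is unlawful or unfair, but it flags places where differences in outputs may end up tracking protected attributes.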

As we will see in subsequent posts in this series, finding a computational or purely formal solution for detecting unlawful discrimination and unfairness remains a challenge. Yet designers and developers must consider whether their technology could inadvertently engage in or incentivize discrimination in actual use contexts, even if no suspect attributes (e.g., gender) are used in ML models.

Moral Bias

Finally, one might understand algorithmic bias as a deviation from certain moral principles related to equality (regardless of whether this is wrongful in the eyes of the law). Call this moral bias.

For example, an algorithm might not be statistically biased: the model might accurately capture actual population values. It might also not be biased in the legal sense: it might violate no legal norms prohibiting discrimination. Yet the algorithm might still (re)produce structural inequalities or reinforce stereotypes in ways that violate common ideals of equality. The gender-biased word embeddings mentioned above are a case in point; they (accurately) reflect uses of language that reinforce gender stereotypes.

Algorithms may also generate demeaning and psychologically harmful results when they act in ways that bear a discriminatory social meaning – a case in point is Google Photos labeling pictures of black persons as ‘gorillas’. Other examples can be found in an earlier KITE blog post.

The reader should note that these different biases may overlap. An algorithm that violates anti-discrimination laws is plausibly also biased in the moral sense, and it may exhibit statistical bias as well, but not necessarily.

The threefold categorization (statistical, legal and moral bias) may nevertheless prove useful for designers and developers in thinking about bias in their AI systems and about how the technology might contribute to discriminatory and otherwise wrongful conduct. Ethics in AI may require more than statistical accuracy and compliance with legislation.

 

Bias is ubiquitous

It is important to emphasize that bias is ubiquitous in all areas of AI. Recommender systems, which are used to recommend content on online platforms, for example, are trained on data that aims to capture individuals’ preferences in some way (e.g., click rates). Preferences may vary across groups – not all Spotify users listen to pop music, and mustache combs are more likely to be purchased by men. Variance in preferences may be explained by individual taste as well as by normative social expectations and behavioral patterns (e.g., socialization) that are morally problematic. Recommender systems may sustain and amplify these existing (or merely presumed) preferences – think of filter bubbles.
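
As a toy illustration of such amplification, here is a small simulation built entirely on invented numbers: a recommender that always promotes the item with the highest observed click rate turns small, essentially arbitrary initial differences into a large skew in what users are shown.

```python
# Toy simulation (invented numbers, illustrative only): a recommender that
# always promotes the currently best-performing item amplifies small
# initial differences into a large skew in exposure.
import numpy as np

rng = np.random.default_rng(2)
true_appeal = np.array([0.51, 0.50, 0.49])  # items are nearly equally appealing
clicks = np.ones(3)        # optimistic initial counts
impressions = np.ones(3)

for _ in range(10_000):
    # Greedily recommend the item with the highest observed click rate so far.
    item = int(np.argmax(clicks / impressions))
    impressions[item] += 1
    clicks[item] += float(rng.random() < true_appeal[item])

print(impressions / impressions.sum())  # exposure concentrates on one item
```

Real recommender systems are far more sophisticated, but the underlying feedback loop (recommendations shape behavior, which shapes future recommendations) is the basic mechanism behind filter bubbles.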

Similarly, autonomous vehicles cruise in locations with different demographics (e.g., urban and rural areas) and under different environmental conditions (e.g., weather). Training determines what they learn and don’t learn – if the vehicle has not been trained for slippery conditions, it may not perform well when there’s black ice on the road.

 

Ways forward?

Industry practitioners want their models to generalize from limited information to more complex contexts while avoiding adverse outcomes. Bias should therefore be a central concern for ethics as well as for general system performance. The ‘fair ML’ community has tackled the problem of bias by (1) presenting formal, mathematical and statistical metrics for ‘measuring discrimination’ and fairness in algorithms and (2) developing methods for mitigating bias.
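
As a small preview of the kind of metric discussed later in this series, the sketch below computes one widely used quantity: the difference in positive-prediction rates between groups, often called the statistical or demographic parity difference. The predictions and group labels are invented for illustration.

```python
# Minimal sketch of one common group fairness metric: the difference in
# positive-prediction rates between groups (demographic parity difference).
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in the rate of positive predictions across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Hypothetical predictions and group memberships.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A"] * 5 + ["B"] * 5)

# Group A is predicted positive 60% of the time, group B 40% of the time.
print(demographic_parity_difference(y_pred, group))  # 0.2
```

Metrics like this capture only a narrow slice of fairness and, as later posts will discuss, different metrics can pull in different directions.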

In the following posts, we will examine these approaches to fair machine learning from both a technical and a philosophical perspective.

 

References

  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104, 671.
  • Bolukbasi, T., Chang, K.-W., Zou, J., Saligrama, V., & Kalai, A. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS’16), pp. 4356–4364.
  • Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the Conference on Fairness, Accountability and Transparency, pp. 77–91.
  • Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), pp. 183-186.
  • Danks, D., & London, A. J. (2017). Algorithmic Bias in Autonomous Systems. IJCAI, pp. 4691–4697.
  • Han, H., & Jain, A. K. (2014). Age, gender and race estimation from unconstrained face images. MSU Technical Report.
  • Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.