What does it mean for AI systems to respect human autonomy?

Otto Sahlgren and Arto Laitinen

This blog post discusses the following questions: What is human autonomy? What does it mean for an AI system to respect it? How can AI systems fail in this regard? And how should designers approach autonomy as an area of ethical concern in the design and use phases of autonomous systems? The post is based on a recent article by Arto Laitinen and Otto Sahlgren published in Frontiers in Artificial Intelligence (2021).

Human autonomy and ethics of AI

In debates around AI ethics, human autonomy remains a peculiar theme: it is often mentioned yet rarely discussed in detail. “[A] complete picture of the moral salience of algorithmic systems”, however, “requires understanding algorithms as they relate to agency, autonomy, and respect for persons” (Rubel, Castro, and Pham, 2020). While AI ethics is not solely about autonomy, autonomy is relevant in surprisingly many ways, partly because it is significantly intertwined with other valuable things, such as privacy (Lanzing, 2016; 2019), transparency (Rubel et al., 2021), and human dignity (Riley & Bos, 2021).

But what is autonomy?

Human autonomy is notably something more than so-called “functional autonomy” (the capacity to operate independently, without external agents’ control). Animals from bees to buffaloes, as well as some machines (e.g., robot vacuum cleaners), are clearly functionally autonomous. But the more demanding notion of human autonomy as self-determination or self-rule requires an adequate degree of control over one’s instincts and impulses (Kant, 1996). This is what most animals (and arguably technologies) lack. Human autonomy is tied to the practical rationality that humans are capable of – the capacity to assess reasons for action and to pursue things that are taken to be of value.

A full picture of autonomy requires consideration of its dimensions and prerequisites. Autonomy has different dimensions, which we can tentatively characterize as follows: First, there are potentials and capacities, on the one hand, and acts and exercises, on the other. Human adults, for example, have the capacity to rationally reflect on their values and goals, and to pursue them, thanks to certain cognitive and physical qualities. When they put these capacities to use, they act autonomously; they exercise their autonomy. Second, feminist philosophers have also long emphasized the relational notion of autonomy (see Mackenzie & Stoljar, 2000; Stoljar, 2018), highlighting how autonomy is continuously constructed in relationships with others. Respect from others and self-respect, for example, are essential to living an autonomous life in this sense.

Note, however, that there can be “degrees” of autonomy in these different dimensions.

Practical autonomy requires certain psychological, physical, and social conditions for the development and exercise of capacities for autonomy to be possible. Beyond these, there are further conditions that are needed, which we call prerequisites of autonomy. For example, one cannot fully develop capacities for autonomy, or exercise one’s autonomy, if

  • one lacks sufficient material and financial resources (e.g., one cannot afford (to choose between) commodities and services);
  • one is not recognized as a person of equal moral worth in the eyes of the law and governing social institutions (e.g., when one does not have equal rights);
  • one lacks sufficient information (e.g., one cannot evaluate whether they have been treated fairly in criminal justice processes); and
  • there aren’t sufficient meaningful cultural resources available for developing one’s unique character (e.g., one cannot become a footballer without cultural practices related to the sport).

This is why we need to assess AI systems’ impact on the constitutive dimensions of autonomy, on the one hand, and several sociotechnical bases of autonomy, on the other, to fully gain a sense of their effects on human autonomy.

Can AI systems respect human autonomy?

The very capacities for autonomous self-determination ground the right to exercise them, and the demand that others respect that right. We should not interfere with autonomous persons’ deliberation and actions unless there are morally overriding reasons to do so. Given the value of being the author of one’s own life, moreover, helping others develop their autonomous capacities and exercise their agency is a morally praiseworthy goal. In this sense, we can also promote others’ autonomy by helping them develop and exercise it.

Now, given that AI systems are “mere machines”, can they literally respect a person’s autonomy? They can surely be obstacles to individuals’ autonomy, much as other people and physical nature can be. For example, I am equally prevented from exercising my physical agency on a walk regardless of whether my path is blocked by an overly eager salesperson keen on taking my money, a fallen tree, or a malfunctioning delivery robot. What distinguishes the tree and the delivery robot from the salesperson is that the former two cannot face normative demands because they are not moral agents. As a moral agent, the salesperson is subject to moral “ought-to-do” norms that govern how they should act. As a general yet overridable rule, they ought not prevent me from enjoying my Sunday walk, for instance.

Say what you want about salespeople’s morals, the fact that they are governed by these norms is a precondition for us to be able to assess their actions from a moral point of view – that is, to judge them, and to hold them accountable. Machines are not (at least currently) capable of recognition or respect, and thereby have no duties; “ought-to-do” norms do not apply to them. Accordingly, I should not hold the delivery robot morally accountable for obstructing my Sunday walk.

However, we argue that corresponding to the “ought-to-do” norms that govern human action, there are “ought-to-be” norms that apply to designed artefacts. They ought to function appropriately and perform well in their tasks. Clocks, for example, ought to show time accurately. AI systems are in this respect like clocks: there are norms that govern how they ought to be. Designers of artefacts (AI systems included) ought to build them as they ought to be.

For human-made artefacts to be ethically acceptable, they ought to be such that they meet ethical requirements (alongside other requirements such as usability). For example, they ought to be fair, transparent, and such that their use promotes well-being and does not inflict unnecessary harm. They also ought to be such that they do not prevent or violate human autonomy. The requirement not to violate autonomy creates duties in the case of moral agents (e.g., the salesperson), and “ought-to-be” norms with the same content in the case of machines (e.g., the delivery robot). Adherence to these norms is what it means for AI systems to “respect” human autonomy.

How can AI systems fail to respect human autonomy?

Persons have the right to be recognized and treated as autonomous. When an agent coerces, deceives, or manipulates a person, this is a moral violation in virtue of being disrespectful of that person’s autonomy. The same goes for other violations of autonomy.

With the aim of helping designers align their AI systems with the principle of respect for human autonomy, we have collected notable violations of autonomy in the list below, explicating the corresponding ought-to-be norms for machines. (Note that we discuss prima facie violations, which can be overridden in specific cases.)


1. Direct interference

Definition: A physically prevents B from doing X.

Example: A robot obstructing a human’s path.

Ought-to-be-norm: An AI system A* ought to be such that it does not prevent B from doing X.


2. Coercion, threats, and naked power

Definition: A forcing B to (choose to) do X instead of Y.

Example: A recommender system lacking an opt-out mechanism and/or not generating (meaningful) alternatives.

Ought-to-be-norm: System A* ought to be such that it does not quasi-coerce B.
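
As a rough illustration of this norm (our sketch, not a recipe from the article), a designer might require that every recommendation “slate” carry an opt-out and at least one meaningful alternative before it is shown to the user. The Python names below (RecommendationSlate, opt_out_available, and so on) are hypothetical, not an existing API.

# Hypothetical sketch: a recommendation slate that always offers an opt-out
# and meaningful alternatives, so the user is not pushed into a single option.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Recommendation:
    item_id: str
    reason: str  # human-readable explanation of why this item was suggested


@dataclass
class RecommendationSlate:
    primary: Recommendation
    alternatives: List[Recommendation] = field(default_factory=list)
    opt_out_available: bool = True  # the user may dismiss the slate entirely

    def validate(self) -> None:
        # Reject slates that would quasi-coerce the user into a single option.
        if not self.opt_out_available:
            raise ValueError("Slate must offer an opt-out.")
        if not self.alternatives:
            raise ValueError("Slate must offer at least one meaningful alternative.")


slate = RecommendationSlate(
    primary=Recommendation("video-42", "Similar to videos you rated highly"),
    alternatives=[Recommendation("video-7", "Popular outside your usual interests")],
)
slate.validate()  # passes; a slate without alternatives or an opt-out would not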


3. Manipulation, indoctrination, and deception

Definition: A manipulating B to value and desire X (or perhaps X instead of Y), thereby creating in B a disposition to (choose) X.¹

Example: Deepfakes lacking notification of inauthenticity (deception); recommender systems being optimized for engagement to the extent that they promote addiction (manipulation; see also cognitive heteronomy).

Ought-to-be-norm: System A* ought to be such that it does not manipulate, indoctrinate, or deceive B into doing X (instead of Y).
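
For the deception example, one contingent design fix is simply to refuse to release synthetic media without an attached notice of inauthenticity. The sketch below is a hypothetical illustration; GeneratedMedia, publish, and the field names are our assumptions rather than any established interface.

# Hypothetical sketch: synthetic media is released only together with an
# explicit notice of inauthenticity.
from dataclasses import dataclass


@dataclass
class GeneratedMedia:
    content_id: str
    synthetic: bool = True
    disclosure_notice: str = ""


def publish(media: GeneratedMedia) -> str:
    # Refuse to publish synthetic content that carries no disclosure.
    if media.synthetic and not media.disclosure_notice:
        raise ValueError("Synthetic media must carry a notice of inauthenticity.")
    return f"Published {media.content_id} with notice: {media.disclosure_notice}"


clip = GeneratedMedia("clip-01", disclosure_notice="This video is AI-generated.")
print(publish(clip))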


4. Nudging

Definition: With environmental cues, A goading B to do X rather than Y (when B has a predisposition to do either).

Example: Recommender systems, which personalize individuals’ choice architectures continuously and opaquely, have been termed “hypernudging” machines (Yeung, 2017).

Ought-to-be-norm: Any quasi-intentional priming by A* should be such that it can be, when asked, openly declared, known, and accepted (by B).
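
One way a designer might approximate this norm (again, our illustration rather than a method from the article) is to log every nudge the system applies and make that log disclosable to the affected user on request. The names NudgeRecord, NudgeRegistry, and disclose are hypothetical.

# Hypothetical sketch: a registry so that any priming applied by the system
# can be disclosed to the user when asked.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class NudgeRecord:
    user_id: str
    description: str        # plain-language account of the cue and its intent
    applied_at: datetime
    accepted_by_user: bool  # True only if the user has agreed to this kind of cue


class NudgeRegistry:
    def __init__(self) -> None:
        self._records: List[NudgeRecord] = []

    def log(self, record: NudgeRecord) -> None:
        self._records.append(record)

    def disclose(self, user_id: str) -> List[str]:
        # Return plain-language descriptions of every nudge applied to this user.
        return [r.description for r in self._records if r.user_id == user_id]


registry = NudgeRegistry()
registry.log(NudgeRecord(
    user_id="u1",
    description="Healthy options were listed first in today's menu.",
    applied_at=datetime.now(timezone.utc),
    accepted_by_user=True,
))
print(registry.disclose("u1"))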


5. Paternalism

Definition: A deciding on behalf of B, benevolently guided by A’s judgement of what is best for B.

Example: Self-tracking technologies have been argued to force paternalistic conceptions of well-being onto individuals, often by using “coveillance” mechanisms which are themselves problematic with respect to autonomy (see Lanzing, 2016; 2019).

Ought-to-be-norm: System A* ought to be such that it does not interfere with B’s decision regarding what is best for B.


6. Cognitive heteronomy

Definition: B willingly defers to A instead of forming their own judgement.

Example: Delegation of tasks to machines can be understood as constituting (cognitive) heteronomy in this sense, although not all tasks are equally relevant for autonomy (see Danaher, 2018; Taylor, 1985). Automated decision-making systems can also actively sustain “automation bias”.

Ought-to-be-norm: System A* should be such that it supports autonomy and positive relations to self, and discourages deference.


7. Direct misrecognition/denial of autonomy

Definition: A not regarding B as capable of, or possessing the right to, self-determination.

Example: AI-generated predictions reproducing racial, gender, ableist (etc.) stereotypes. (See, e.g., Noble (2018) on search engines and racial stereotypes.)

Ought-to-be-norm: System A* should be such that it does not “send a message” that B is not capable of, or does not possess the right to, self-determination.


8. Misrecognition/denial of preferred labels

Definition: A not regarding B in light of the particular self-understandings that B has autonomously self-defined.

Example: Software “predicting” traits over which a person has first-person authority (e.g., gender, emotion); platforms lacking sufficient affordances for self-identification (e.g., non-binary genders).

Ought-to-be-norm: System A* should be such that it allows for B to be regarded in light of the particular self-understandings they have autonomously self-defined.
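
As an illustration of the kind of affordance meant here (a hypothetical sketch; the class and field names are ours), a profile model can treat identity labels as user-declared free text rather than as system-inferred or fixed categories.

# Hypothetical sketch: identity labels are set only by the user, in their own
# terms, and are never predicted by the system.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SelfDeclaredProfile:
    user_id: str
    # Free-text, optional self-identification instead of a fixed enum;
    # None means the user chose not to declare, and the system must not infer it.
    gender: Optional[str] = None
    pronouns: Optional[str] = None

    def update_gender(self, declared_by_user: str) -> None:
        # Only the user may set this value.
        self.gender = declared_by_user


profile = SelfDeclaredProfile(user_id="u1")
profile.update_gender("non-binary")  # declared by the user, in their own terms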


Now, one insight that can be gained by looking at these violations is that most applications of AI which seem to be in tension with human autonomy are so due to contingent design features (e.g., lack of transparency, insufficient privacy, lack of sufficient data categories). In other words, we suggest there is no intrinsic or necessary tension between human autonomy and AI. The upshot is that we can design autonomy-respecting AI by paying better attention to those functionalities, features, and other design aspects. Through better design (broadly construed), individuals can retain their “meta-autonomy” – the ability to decide when to decide (Floridi et al., 2018).
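
The idea of meta-autonomy can also be read as a simple design pattern: an automated decision runs only when the user has delegated that type of decision, and control reverts to the user otherwise. The sketch below is our own reading of that idea; the function and preference names are assumptions, not an established API.

# Hypothetical sketch of "deciding when to decide": automation is gated by the
# user's standing preferences about which decisions may be made for them.
from typing import Callable, Dict

delegation_prefs: Dict[str, bool] = {
    "playlist_ordering": True,        # the user is happy to automate this
    "loan_prequalification": False,   # the user wants to decide this personally
}


def decide(decision_type: str,
           automated: Callable[[], str],
           ask_user: Callable[[], str]) -> str:
    # Run the automated decision only if the user has delegated this decision type.
    if delegation_prefs.get(decision_type, False):
        return automated()
    return ask_user()


result = decide(
    "loan_prequalification",
    automated=lambda: "auto-approved",
    ask_user=lambda: "deferred to the user",
)
print(result)  # "deferred to the user": the user retained the ability to decide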

AI systems’ effects on prerequisites of autonomy

We have thus far considered violations of autonomy. To gain a fuller picture of AI systems’ impact on human autonomy, one needs to look at their wider effects on the prerequisites of autonomy in actual contexts: the implementation of AI systems can have broad and indirect effects which affect individuals’ meta-autonomy, and the development of autonomous capacities.

For example, if AI systems are used (intentionally or unintentionally) to undermine social and legal institutions that promote people’s access to rights, or to prevent their access to economic and material resources, this will have an effect on how people can develop and exercise their autonomy. If AI systems reduce the availability of cultural practices and resources (e.g., by homogenizing what cultural products we consume), they make the space of our cultural existence bleaker. When the range of options is sufficiently narrow, one can no longer exercise autonomy in choosing between them. In this sense, the sociotechnical bases of autonomy are equally important when considering how to design and use AI in an ethically sustainable manner.

Luckily, by directing our gaze towards these sociotechnical bases and technologies’ impact on them, we may also find novel ways of creating environments and communities that promote our autonomy and sustain and strengthen institutions that enable us to live autonomous lives in the digital society.

Notes

[1] Not all deception is manipulation, and not all manipulation is deception, but both are violations of human autonomy.

References

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People-An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds Machines 28 (4), 689–707. doi:10.1007/s11023-018-9482-5

Kant, I. (1996). The Metaphysics of Morals. In The Cambridge Edition of the Works of Immanuel Kant: Practical Philosophy. Editor M. Gregor (Cambridge, UK: Cambridge University Press).

Laitinen, A., and Sahlgren, O. (2021). AI Systems and Respect for Human Autonomy. Front. Artif. Intell. doi:10.3389/frai.2021.705164

Lanzing, M. (2016). The Transparent Self. Ethics Inf. Technol. 18 (1), 9–16. doi:10.1007/s10676-016-9396-y

Lanzing, M. (2019). “Strongly Recommended” Revisiting Decisional Privacy to Judge Hypernudging in Self-Tracking Technologies. Philos. Technol. 32 (3), 549–568. doi:10.1007/s13347-018-0316-4

Mackenzie, C., and Stoljar, N. (2000). Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self. New York: Oxford University Press.

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Riley, S., and Bos, G. (2021). Human Dignity. Internet Encyclopedia of Philosophy. Available at: www.iep.utm.edu/hum-dign/ (accessed May 2, 2021).

Rubel, A., Castro, C., and Pham, A. (2020). Algorithms, Agency, and Respect for Persons. Soc. Theor. Pract. 46 (3), 547–572. doi:10.5840/soctheorpract202062497

Rubel, A., Clinton, C., and Pham, A. (2021). Algorithms & Autonomy: The Ethics of Automated Decision Systems. Cambridge: Cambridge University Press.

Stoljar, N. (2018). Feminist Perspectives on Autonomy. The Stanford Encyclopedia of Philosophy (Winter 2018 Edition), Edward N. Zalta (ed.).

Taylor, C. (1985). What’s Wrong with Negative Liberty? In Philosophy and the Human Sciences: Philosophical Papers 2. Cambridge: Cambridge University Press.

Yeung, K. (2017). ’Hypernudge’: Big Data as a Mode of Regulation by Design. Inf. Commun. Soc. 20 (1), 118–136. doi:10.1080/1369118x.2016.1186713