How Might AI Challenge Design Practices?

Blog post by Thomas Olsson

The writer is an Associate Professor in Human-Technology Interaction at Tampere University. His research approaches socio-technical systems and social computing with a critical voice and an aspiration to envision more sustainable ICT.

With the ongoing AI hype cycle and recent technological developments, artificial intelligence has become the bread and butter of our digital lives. On a daily basis, a typical WEIRD (Western, educated, industrialized, rich, and democratic) user interacts with, or is influenced by, various machine-learning-based search engines, recommender systems, photography enhancement tools, user profiling features, and other forms of so-called narrow AI. In the ongoing commodification of AI, the role of design practice is ever more important, yet it remains largely unrecognized.

This blog post reconsiders the design of digital systems as a profession in the AI era. It asks how the design of information systems, AI apps, and other digital things might change with the new building blocks that AI enables. How do the ethical considerations of AI challenge the conventions of service design and interaction design? How should we be imagining, conceptualizing, and crafting AI services?

Key take-aways:

  • AI and the related ethical questions introduce new challenges and quality attributes to consider in design.
  • To discern changes in design practice, consider the 5 Ps: Purpose, People, Product, Principles, and Process.
  • Guidelines for AI design & ethics are inherently abstract, so they must be contextualized and concretized in each design case.

 

Introduction to (or Reminder of) Designership

Let’s first recap who designers are and what they do. This post mainly targets people who think about the purpose and existence of a given information system, i.e., so-called concept design (people holding professional titles such as service designer, user experience designer, or product owner). However, design work spans more broadly than these jobs, so I welcome anyone participating in the production of digital products to reflect on this text (e.g., visual designers, software developers and architects, project managers).

Broadly defined, all design is concerned with change and preferred futures, mindful of specific goals and constraints. That is, there are certain practical intents aimed at certain directions under certain conditions. While the repeated word “certain” might sound definite, it is in fact case-specific and defined by the project at hand: client, technology, target users, context of use, and so on.

“Everyone designs who devises courses of action aimed at changing existing situations into preferred ones.” – Herbert Simon, Nobel laureate in Economics

“Design is the ability to imagine that-which-does-not-yet-exist, to make it appear in concrete form as a new purposeful addition to the real world.” – Harold Nelson & Erik Stolterman

A bit more concretely, the activity of design applies imagination and constructive forethought to practical problems, uses drawings and other modelling media as means of problem solving, is intended to result in novel solutions, demands tolerance of uncertainty, and involves working with incomplete information (Lawson & Dorst, 2009). In the design of digital artefacts, there are various well-established methods, tools, and practices to address each of these aspects, spanning from user research and analysis methods to idea creation and prototyping techniques.

I assume that most designers can readily identify with the notions of imagination, problem solving, and the hunt for novelty. After all, those are the kinds of things that we are educated (by schools) and incentivized (by clients and the overall economy) to do. We are taught to become change agents.

Perhaps a less prominent element of a designer’s identity is its moral aspect: professional ethics. This is highlighted above by words like “preferred”, “purposeful”, and “constraints”. In other words, designers have the power to define the preferences and values that their creations follow. Hence, they bear responsibility as professionals and are expected, by the surrounding society, to work for goals that most would consider desirable.

The moral dimension is ever more important to understand when designing AI: the effects of the products and services can be broad and long-term, as extensively discussed in an earlier blog post about AI ethics and the question of accountability. As Nelson & Stolterman (2002) put it well: “we cannot know what the unintended consequences of a design will be, and we cannot know, ahead of time, the full, systemic effects of a design implementation. We can be godlike in the cocreation of the world, yet we cannot be godlike in our guarantee that the design will be only what we intended it to be […]”. If an AI service produces, for example, fairness issues or detrimental behavioral effects, whether foreseeable or unexpected, intended or unintended, the designers can rightfully feel accountable and try to repair the damage.

Well, perhaps this is enough of a reminder of designers’ responsibility for one text. If you are interested in designers’ professional ethics, feel free to check this book or this lecture, for example.

But what factors influence how design is done, and what values condition design activity? Based on quick brainstorming, at least the following: professional education, organization and team culture, clients’ interests and operating culture, and the professional design community, not to mention the economic system and broader cultural norms. As designers do not take an oath upon graduation (unlike medical doctors with their Hippocratic oath), the practice and values of design can only change by affecting all of the aforementioned factors.

Therefore, an important meta-message of this post is that all of these institutional and cultural powers need to revisit the question of how design work is, and should be, directed and organized. This post is an attempt to provide some starting points and reflections to inspire change.

What to expect when you’re expecting AI?

Over the last few decades, AI has been the target of exceptionally extensive speculation. In addition to predicting the pace of technological development, people attempt to produce various socio-technical imaginaries of what life with AI might be like, as well as to forecast the kinds of economic, cultural, societal, and behavioral trends this much-hyped technology might yield. Building on various such visions, the following highlights some general characteristics of AI that I consider particularly relevant for designers to keep in mind. Notably, these considerations are already valid for present-day manifestations of narrow AI; they do not require any super-human general AI.

First, the development of AI seems to correlate with the increasing agency of technology in general. That is, the level of automation is steadily increasing, digital systems are unquestionably influencing people’s behavior, and autonomous algorithms are trusted even in activities and decisions that have traditionally been at the discretion of human actors: social security decisions, school ranking, and recruitment, to mention a few examples (Eubanks, 2018; O’Neil, 2016).

This trend brings opportunities to improve specific processes, increase productivity, automate undesirable routine tasks, and so on. However, it also involves the risk of over-optimistically delegating complex decision-making and reasoning to opaque AI applications. Mindful of the risks, how should we then design AIs that take part in complex, high-stakes decision-making, perhaps in collaboration with a human user?
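One frequently discussed answer is to design for deferral: the system acts autonomously only when the stakes are low and its confidence is high, and otherwise hands the case to a human with the model’s proposal attached as decision support. The following is a minimal sketch of that routing logic, not a ready-made recipe; the Proposal type, the confidence threshold, and the very notion of a self-reported confidence score are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    outcome: str       # the system's suggested decision
    confidence: float  # the model's self-reported confidence in [0, 1]
    rationale: str     # human-readable grounds, shown to the reviewer

def route_decision(proposal: Proposal,
                   high_stakes: bool,
                   threshold: float = 0.9) -> Optional[Proposal]:
    """Return the proposal for automatic action, or None to defer to a human.

    High-stakes and low-confidence cases are never decided automatically;
    the proposal travels along as decision support, not as the verdict.
    """
    if high_stakes or proposal.confidence < threshold:
        return None  # enqueue for human review elsewhere
    return proposal

# Example: a benefits decision is always routed to a human caseworker,
# no matter how confident the model claims to be.
p = Proposal(outcome="deny", confidence=0.97, rationale="income above limit")
assert route_decision(p, high_stakes=True) is None
```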

Second, and related to the previous point, AI seems to strengthen the general trend of technology becoming increasingly pervasive: technologies tend to enter even the application areas that people might consider the most humane, non-technical, or intimate (e.g., personal coaching, healthcare, relaxation, sex). Whatever the possible benefits in even the most questionable application areas, designers will need to deal with the risks of societal rejection and lack of trust. This raises the question of how to define appropriate roles for the technology from the viewpoint of the target activity and user group. How can we ensure that the surrounding culture accepts what might appear to be a technological intrusion or intervention?

The third point is the rapidly evolving nature of AI systems, in contrast to more conventional monolithic information systems that might remain unchanged for years. Machine learning in particular enables systems to change (improve or worsen) through use and through the accumulation of new data over time. For design work, this calls for continuous monitoring to identify and help avoid unintended consequences, as well as for so-called repair work to amend any harm caused. Often the collateral damage is irreparable rather than an easy fix, implying that preventive mechanisms are preferable to retrospective repair.
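To make “continuous monitoring” slightly more concrete, here is a deliberately crude sketch of one building block: a check that raises a flag when a monitored feature of the incoming data drifts away from its training-time baseline. The feature, the z-score test, and the threshold are all illustrative assumptions; real monitoring would track many features, distributions, and model-quality metrics over time.

```python
import statistics

def drift_alert(baseline, recent, z_threshold=3.0):
    """Flag when the mean of a monitored feature drifts away from its
    training-time baseline by more than z_threshold standard errors."""
    mu = statistics.mean(baseline)
    se = statistics.stdev(baseline) / len(recent) ** 0.5
    return abs(statistics.mean(recent) - mu) / se > z_threshold

# Example: the recent inputs have clearly shifted; time to investigate
# before harm accrues, rather than repair it afterwards.
baseline = [0.90, 1.10, 1.00, 0.95, 1.05, 1.00, 0.98, 1.02]
recent = [1.60, 1.70, 1.55, 1.65]
print(drift_alert(baseline, recent))  # True
```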

Hence, designers need insight into the long-term socio-technical implications that their designs might have. This further stresses the need for new assessment methods and practices, tailored to the different kinds of potential effects. On a positive note, the evolving nature of these systems is also a business opportunity that can spur the shift from product business to service business, which likewise changes the nature of design work.

Fourth, AI challenges how the notion of interaction should be considered and structured. Aside from the general trend of diversification of human-computer interaction techniques (gestural and natural-speech-based interaction, brain-computer interfaces, etc.), the whole mindset of interaction could evolve from “human-computer interaction” to “human-machine integration” or “human-machine teaming” (Xu, 2019). So, rather than designing user interfaces, perhaps we should start designing scripts for collaboration and rules of partnership, whatever that might mean in practice? A suitable analogue could be interaction with pets: as the intelligence of your interactee increases (e.g., a chinchilla vs. a dog), the nature of interaction tends to change significantly, becoming more two-directional, abstract, and semantically rich.

Finally, and related to technology development in general, there has been a strong ethical awakening in both developer organizations and the public discussion around digital technology. People are demanding responsibility, fairness, transparency, and bias-free decisions from algorithms (Shilton, 2018), perhaps even more strongly than from each other. In other words, the moral intensity of techno-ethical issues in general seems to be rising.

Considering the abovementioned aspects and the growing speed of AI development, design work is continuously challenged by new perspectives and quality attributes to design for. Only 25 years ago, usability was still a relatively new aspect to cater for; 15 years ago came the broader yet more elusive concept of user experience; and perhaps now is the time for the even more elusive concept of ethicality.

 

How might AI change design practice?

Now, a very practical question arises: what elements of design work might change? As a rudimentary theory of AI-design dynamics, let me introduce the 5 Ps: Purpose, People, Product, Principles, and Process.

Purpose refers to the key drivers of design work. Design will fundamentally remain about problem solving also in the future; it is a profession that basically solves others’ problems for money, as conditioned by the economic system. What AI’s increasing integration into everyday life might change, however, are the concrete targets that incentivize design work, such as key performance indicators and other metrics of success. Can performance or success even be defined at the point of release anymore, or only after a running-in period? The “definition of done” will likely change as the product evolves over time, becoming a moving target.

This underlines the importance of post-release follow-up, comprehensive reflection, and defined intervention and repair mechanisms for products already in use. That is, follow-up should be not only about “software maintenance” (the technical viewpoint) or “UI patches” (the usability viewpoint) but also about the developers’ responsibility to see that the system is used appropriately, even pulling the plug if necessary.

More broadly, I would bet that ethical perspectives will affect the very purpose of design as an engine of product innovation. Ethicality and sustainability are already becoming business assets and differentiators in global competition. Perhaps organizations optimizing for the wellness of communities will come to be valued more than those optimizing for the experience of an individual?

People relates to whom we design for. The general idea of catering for different user groups and cultures remains a valid goal when designing AI applications. This issue has already been extensively addressed by movements like Design4all, universal design, and inclusive design.

What AI perhaps newly introduces are different forms of usership, i.e., positions of being a user. For example, the use of proactive and ubiquitous services like ambient voice assistants emphasizes the notion of the secondary user: to avoid conflicts or interpretation errors, other people in the room often need to adapt their behavior while the system is being used. Especially when collocated, they might co-experience the output, but who counts as the active user must be negotiated while commanding the system. Passive use could mean that your behavior (e.g., speech) is continuously monitored by a system. Unintended inputs and commands (i.e., “false positives”) can also be made by people who did not wish to use the system in the first place. One’s autonomy to opt out might hence weaken.

Furthermore, usership is increasingly dynamic: continuing with the same example, anyone can start using the voice assistant with a simple utterance. Compared to switching between user profiles on a laptop, for example, this instantaneity requires careful design of affordances.

Product refers to what kinds of entities or artefacts are designed. Perhaps we will shift from designing instrumental tools to designing partners that augment our activities and capabilities (e.g., replika.ai). As users, we might turn from commanders and controllers into teachers or coaches of our AI friends. The idea of post-instrumental functions of technology was already much discussed in the early years of user experience, and I suspect that AI will only take this idea further.

Another perspective is that the objects of design might change from stand-alone software products and services to features or modules on existing platforms and services. With the platform economy, this development is already ongoing, and it is likely to speed up as designed solutions become more generally applicable and transferable between application areas. For example, current search engines and recommender engines already tend to be reused across products and contexts of use, so why would the same not happen with “fairness engines” or “ethical self-assessment modules”?

Principles are the fundamental propositions and laws that shape how a designer understands and solves design problems. The recent AI ethics literature highlights the importance of, e.g., Fairness and Explainability, both of which are relatively new as designable quality attributes (i.e., principles). In general, top quality tends to be an elusive goal in any design. Now, the constantly updating set of principles demands sensitivity to identify emerging qualities, as well as agility to cater for them in design work. Furthermore, if the expectations of AI bringing paradigmatic changes to our lives are realized, there will likely be completely new benchmarks against which any design work is contrasted.
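To illustrate what treating fairness as a designable, measurable quality attribute might look like, here is a minimal sketch of one classic check: the demographic parity gap, i.e., the difference in positive-decision rates between groups. Demographic parity is only one of several mutually conflicting fairness criteria, and choosing which one applies is itself a case-specific design decision; the function and the toy data below are illustrative assumptions.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Largest gap in positive-decision rates across groups.

    `decisions` is an iterable of (group_label, got_positive_outcome) pairs.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: toy log of loan approvals per applicant group.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(log)
print(gap, rates)  # gap is about 0.33 (A is about 0.67, B is about 0.33): worth investigating
```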

Different design movements and patterns will likely advocate different perspectives, diversifying perceptions of what counts as high quality. At the same time, the idea of utilizing design patterns, design languages, and legacy solutions involves risks: when algorithmic models and procedures are transferred from one problem to another, they can be decontextualized and misappropriated such that, for example, new types of emergent bias are unintentionally introduced (Friedman & Nissenbaum, 1996).

In other words, mind the gap between your benchmarks and the current case: rather than mimicking earlier solutions, it might be wiser to expand one’s own problem-solving capabilities and epistemic landscapes (as underlined in an earlier blog post). This can include strengthening skills in, e.g., envisioning probable futures (what-if scenarios, alternative socio-technical imaginaries), identifying new sources of design inspiration, and identifying possible gaps between intentions (in design) and the eventual use of technology (Albrechtslund, 2007).

Process is about how design is practically structured as a hands-on, collaborative activity. Echoing what was said earlier, processes and organizational practices should permit and encourage the consideration of new quality attributes, and should prevent practices that jeopardize ethical quality or lead to “stupid mistakes” resulting from unawareness. While design has conventionally been an isolated process, accessible to and influenceable by few, the growing need to understand the systemic effects of technology calls for making design processes more inclusive and diversity-welcoming. This is necessary for advancing development teams’ systems thinking skills from the current beginner level towards true competence and expertise.

With respect to practices, the wicked issue of ethical responsibility and its management during the design process calls for new kinds of review and approval processes. Perhaps we also need new methods for defining the design problems from multiple perspectives, for producing a breadth of conceptually different alternative solutions, and for assessing their quality from various stakeholders’ viewpoints.

To speculate further, perhaps the nearly axiomatic agile and lean development processes need to be rethought, because they tend to favour speed over deliberation. Might we even need to go back to good old heavyweight specification documents? Is it “agile is dead” or “long live agile”? In any case, the famous (though disputed) Zuckerberg quote “move fast and break things”, as an invitation not to care, will likely not remain as well-respected as it used to be.

Finally, considering the need for additional skills during the design process, who could or should be involved? New roles like digital ethics assessment experts, visionaries of alternative futures, empathy trainers, and ethics compliance officers have been much discussed (see, e.g., Wilson et al., 2017), and it will be interesting to see what kinds of competences future design teams will embody.

 

How about actionable design methods and guidance?

This post has mainly speculated about what aspects of design might change, without offering much guidance on how to deal with these changes. If you were seeking ready-made recipes, I am sorry to disappoint you. Unfortunately, there are no easy answers, and generally applicable processes are bound to remain abstract. The ethical aspects of AI are characterized by complexity, flexibility, and pluralism; solutions tend to be compromises between several, often conflicting, principles, so the optimal (or least bad) designs can only be reached through discussion and careful analysis of the case at hand.

On a positive note, there are some (actually plenty of) resources available that try to inform designers and develop their techno-ethical sensitivity. While many of them remain quite abstract, let me list a couple of links to some of the most insightful ones that our KITE project has come across.

Finally, to offer my two cents on this lack of actionable guidance, I promise another post that surveys various design methodologies that might help. In particular, I plan to survey thought-provoking, non-mainstream design approaches, such as critical design, value-sensitive design, stakeholder-centric design, and discursive design. More on that later.

 

References

  • Albrechtslund, A. (2007). Ethics and Technology Design. Ethics and Information Technology, 9, 63–72. DOI: 10.1007/s10676-006-9129-8
  • Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
  • Friedman, B., & Nissenbaum, H. (1996). Bias in Computer Systems. ACM Transactions on Information Systems, 14(3), 330–347. DOI: 10.1145/230538.230561
  • Lawson, B., & Dorst, K. (2009). Design Expertise. Taylor & Francis.
  • Nelson, H., & Stolterman, E. (2002). The Design Way. MIT Press.
  • O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
  • Shilton, K. (2018). Values and Ethics in Human-Computer Interaction. Foundations and Trends in Human-Computer Interaction, 12(2), 107–171.
  • Wilson, H. J., Daugherty, P. R., & Morini-Bianzino, N. (2017). The Jobs That Artificial Intelligence Will Create. MIT Sloan Management Review, March 2017.
  • Xu, W. (2019). Toward Human-Centered AI: A Perspective from Human-Computer Interaction. ACM Interactions, 26(4), July–August 2019.