Artificial intelligence (AI) is embedded in a wide variety of Smart City applications and infrastructures, often without citizens being aware of the nature of this “intelligence”. AI can affect citizens’ lives concretely, and thus there may be uncertainty, concerns, or even fears related to AI. To build acceptable futures of Smart Cities with AI-enabled functionalities, the Human-Centered AI (HCAI) approach offers a relevant framework for understanding citizen perceptions. However, only a few studies have focused on clarifying citizen perceptions of AI in the context of smart city research.
Lehtiö, Anu, Maria Hartikainen, Saara Ala-Luopa, Thomas Olsson, and Kaisa Väänänen. “Understanding citizen perceptions of AI in the smart city.” AI & SOCIETY (2022): 1-12.
How to design human-centered AI solutions – a review of human-centered AI design.
Human-Centered AI (HCAI) advocates the development of AI applications that are trustworthy, usable, and based on human needs. While the conceptual foundations of HCAI are extensively discussed in recent literature, the industry practices and methods appear to lag behind. To advance HCAI method development, current practices of AI developer companies need to be understood. To understand how HCAI principles manifest in the current practices of AI development, we conducted an interview study of practitioners from 12 AI developer companies in Finland, focusing on the early stages of AI application development.
Maria Hartikainen, Kaisa Väänänen, Anu Lehtiö, Saara Ala-Luopa, and Thomas Olsson. 2022. Human-Centered AI Design in Reality: A Study of Developer Companies’ Practices. In Nordic Human-Computer Interaction Conference (NordiCHI ’22), October 08–12, 2022, Aarhus, Denmark. https://doi.org/10.1145/3546155.3546677
Human-centered AI aims to bring efficient solutions to user problems and to provide positive and beneficial outcomes to users, to those affected by its operation, and to society in general. The use of AI may introduce new AI-related factors and requirements that should be acknowledged in development. We aim to increase knowledge of HCAI requirements in AI developer companies with a maturity model that is suitable for practical use, gives comprehensive guidance, and provides helpful tools and toolkits to promote the practical implementation of the model.
Read more about the model development here
Applications of artificial intelligence (AI) are increasingly deployed to support complex expert work, such as workforce recruitment in organizations. Amidst the push of new e-recruitment systems by technology providers, there is little research on recruitment experts’ views on trusting AI in their work, particularly concerning user needs, opportunities for employing AI, and considerations regarding trust in AI. To understand recruitment experts’ perceptions of the future use of AI in their work, we conducted an interview study with Finnish recruitment experts (N=15). The findings underline the need for AI as augmentation: AI could offer analytical competencies that complement or challenge the recruitment experts’ analysis and deliberation.
Developers’ and accounting practitioners’ perceptions of trust in AI as intelligent automation – report of company collaboration with fabricAI.
Chatbots have spread widely in online customer service due to recent technological advancements. For chatbots to be successfully utilised as customer service agents, it is essential that they are developed for human users in a way that meets their needs. Human-centred design (HCD) puts humans at the centre of development. However, research-based information on HCD for service chatbots is still scarce. Hence, we reviewed recent literature to explore and map the recurrent themes in chatbot research and to determine the building blocks of HCD for service chatbots.
Human-AI interaction and UI prototypes
This article investigates model inversion attacks against machine learning models in a black-box setting. On the one hand, an adversary can extract feature vectors of samples in a local dataset, while the underlying model’s architecture and parameters are unknown. On the other hand, the adversary has illegitimate access to feature vectors of user data. We thoroughly analyze the following two attack scenarios on state-of-the-art models in person reidentification: recognizing auxiliary attributes and reconstructing user data.
Ni, Xingyang, Heikki Huttunen, and Esa Rahtu. “On the Importance of Encrypting Deep Features.” In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 4142-4149. 2021.
Semantic segmentation is the task of assigning a label to every pixel in an image; it is often referred to as classification at the pixel level. It plays an essential role in AI systems such as self-driving cars, medical image processing, retail image analysis, scene understanding, and many other life-impacting use cases. Semantic segmentation also achieves promising results in damage detection.
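To illustrate what pixel-level classification means in practice, here is a minimal NumPy sketch (the random logits stand in for the output of a hypothetical segmentation network, not any specific model from this work):

```python
import numpy as np

# Toy network output: 1 image, 3 candidate classes, 4x4 pixels.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1, 3, 4, 4))

# Per-pixel classification: pick the highest-scoring class at
# each pixel, yielding one label per pixel.
labels = logits.argmax(axis=1)

print(labels.shape)  # one label map of shape (1, 4, 4)
```

The label map has the same spatial resolution as the input, which is what distinguishes segmentation from whole-image classification.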
Since neural networks are data-hungry, incorporating data augmentation in training is a widely adopted technique that enlarges datasets and improves generalization. On the other hand, aggregating the predictions of multiple augmented samples (i.e., test-time augmentation) can boost performance even further. In the context of person re-identification models, it is common practice to extract embeddings for both the original images and their horizontally flipped variants, taking the mean of the two feature vectors as the final representation. However, such a scheme results in a gap between training and inference: the mean feature vectors computed at inference are not part of the training pipeline. In this study, we devise the FlipReID structure with the flipping loss to address this issue.
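The flip-averaging convention described above can be sketched as follows; `embed` is a hypothetical stand-in for a trained re-identification network’s embedding head, not the FlipReID model itself:

```python
import numpy as np

def embed(images):
    # Stand-in embedding head: flatten and L2-normalize.
    # (A real re-ID model would be a trained CNN.)
    flat = images.reshape(images.shape[0], -1)
    return flat / np.linalg.norm(flat, axis=1, keepdims=True)

def tta_embed(images):
    """Test-time augmentation: average each image's embedding
    with the embedding of its horizontally flipped copy."""
    flipped = images[..., ::-1]  # flip along the width axis
    return (embed(images) + embed(flipped)) / 2.0

images = np.random.default_rng(1).normal(size=(2, 3, 8, 8))
feats = tta_embed(images)  # shape (2, 192)
```

By construction the averaged representation is flip-invariant, but, as the abstract notes, this mean vector is never seen during standard training, which is the train/inference gap FlipReID targets.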
Methods and recommendations for the development of human-centered artificial intelligence
This map defines and presents the main concepts related to AI and their relations.
Machine learning–based systems have become the bread and butter of our digital lives. Today’s users interact with, or are influenced by, applications of natural language processing and computer vision, recommender systems, and many other forms of so-called narrow AI. In the ongoing commodification of AI, the role of design practice is increasingly important; however, it involves new methodological challenges that are not yet solved or established in design practice.
Olsson, Thomas, and Kaisa Väänänen. “How does AI challenge design practice?” Interactions 28, no. 4 (2021): 62-64.
A comprehensive design package to support the development and design of human-centered artificial intelligence.
This review aims to understand trust in technology, and especially trust in artificial intelligence, from a human-centered perspective. We present trust in AI from three perspectives: academic, industry, and governmental. This article aims to answer the following questions: a) what is trust in technology and in artificial intelligence, and b) how can trust in artificial intelligence be built and maintained?
Ethical recommendations for the development of artificial intelligence applications
Ethical recommendations to support the development of artificial intelligence.
- Ethics guidelines as the main output, compiling the findings of the project and presenting the principles for ethical artificial intelligence.
- The guidelines also contain policy recommendations.
- Guidance for integrating AI ethics at the organizational level
- Guidelines for operationalizing the ethics of artificial intelligence at the level of technology planning and development
- Guidance on artificial intelligence and respect for human autonomy
The concept of social pathology has long belonged to the toolkit of social scientists, and several critical social philosophers have found it indispensable for linking social ontology to social criticism. While different conceptions of social pathology, as well as their applicability as diagnostic tools for social wrongs, have been debated, a common area of neglect becomes apparent when we consider pathological states of social wholes, such as societies, as not only socially but technically constituted.
Sahlgren, Otto. “Towards a Conception of Sociotechnical Pathology.” In Proceedings of the Conference on Technology Ethics 2021 (Tethics 2021). 2021.
Some comparisons yield puzzling results. In the puzzling cases, neither item is determinately better than the other, but they are not exactly equal either, as improving one of them just slightly still does not make it determinately better than the other. What does this kind of incommensurability or incomparability mean for robots? We discuss especially Ruth Chang’s views, arguing for four claims.
Laitinen, Arto, and Otto Sahlgren. “Incommensurability, Incomparability, and Robotic Choices.” In Social Robots in Social Institutions: Proceedings of Robophilosophy, pp. 389-398. IOS Press, 2022.
This paper discusses value relations and questions of incommensurability and incomparability in the context of machine learning and fairness therein. We examine three stances and consider their implications for machine learning supported decision-making and the pursuit of fair algorithms using a hypothetical example from recruitment.
Sahlgren, Otto, and Arto Laitinen. “Computing Apples and Oranges?: Implications of Incommensurability for (Fair) Machine Learning.” (2022).