Trust in AI-assisted Decision Making: Perspectives from Those Behind the System and Those for Whom the Decision is Made
Abstract
Trust between humans and AI in the context of decision-making has acquired an important role in public policy, research, and industry. In this context, Human-AI Trust has often been approached through the lens of cognitive science and psychology, but insights from the stakeholders involved remain lacking. In this paper, we conducted semi-structured interviews with 7 AI practitioners and 7 decision subjects from various decision domains. We found that 1) interviewees identified the prerequisites for the existence of trust and distinguished trust from trustworthiness, reliance, and compliance; 2) trust in AI-integrated systems is more strongly influenced by other human actors than by the system's features; 3) the role of Human-AI trust factors is stakeholder-dependent. These results provide clues for the design of Human-AI interactions in which trust plays a major role, as well as outline new research directions in Human-AI Trust.