What AI Practitioners Say about Human-AI Trust: Its Role, Importance, and Factors That Affect It
Abstract
Establishing appropriate human trust in artificial intelligence (AI) has become a priority in the development of AI-embedded systems. Understanding the factors that shape Human-AI trust is crucial to achieving this goal. In this working paper, we investigate how AI practitioners perceive and account for Human-AI trust in the design and deployment of AI-embedded systems that support decision making in the field. Our preliminary results stem from 5 interviews with AI practitioners. We identified that 1) Human-AI trust largely remains an afterthought as a research topic; 2) it plays an important role for decisions associated with risk and for complex tasks; 3) AI practitioners consider the following aspects when establishing Human-AI trust: AI performance and error, explainability of AI, Human-Human trust, and interaction with AI over time. These preliminary results have direct implications for future research directions on Human-AI trust and for the design and deployment of AI-embedded systems. To further advance our fundamental understanding of Human-AI trust, we plan to interview additional AI practitioners as well as other stakeholders of related AI-embedded systems, in decision-making contexts and beyond.