Towards an explainable and convivial AI based tools: Illustration on medicine applications
Conference paper. Year: 2019


Abstract

Since 2010, numerical Artificial Intelligence (AI) based on Machine Learning (ML) has produced impressive results, mainly in the fields of pattern recognition and natural language processing, succeeding the previous dominance of symbolic AI, which was centered on logical reasoning. The integration of ML methods into industrial processes raises hopes for new growth drivers. At first sight, these impressive results could be taken as the end of mathematical models, since statistical analysis is able to reproduce phenomena. In truth, Machine Learning rests on the inductive models theorized by Francis Bacon in 1620. The use of inductive models requires explaining the predictions obtained on data, which is currently rarely done in industrial Machine Learning applications. Consequently, the operational benefit of Machine Learning methods is recognized but is hampered by the lack of understanding of their mechanisms, which gives rise to operational, legal, and ethical problems. This strongly affects the operational acceptability of AI tools, which depends largely on the ability of engineers, decision-makers, and users to understand the meaning and the properties of the results these tools produce. In addition, the increasing delegation of decision-making offered by AI tools competes with tried and tested business rules, which sometimes constitute certified expert systems. Machine Learning could thus now be considered a colossus with feet of clay. It is important to note that this difficult problem will not be solved by mathematicians and computer scientists alone. It requires a broad scientific collaboration, for example with philosophers of science to investigate the properties of inductive models, cognitive psychologists to evaluate the quality of an explanation, and anthropologists to study the relation and communication between humans and these AI tools.
The first part of the talk presents the challenges and benefits of Artificial Intelligence for industry and services, in particular for medicine. Medicine is changing its paradigm in depth, moving from a reactive to a proactive discipline in order to reduce costs while improving healthcare quality. It is useful to remember that automatic healthcare tools were developed before the success of Machine Learning. For example, the MYCIN program, developed in the seventies at Stanford University, identified bacteria causing severe infections, such as bacteremia and meningitis, and recommended antibiotics, with the dosage adjusted for the patient's body weight. It was based on Good Old-Fashioned AI (an expert system). It is telling that MYCIN was never actually used in practice, not because of any weakness in its performance but largely because of ethical and legal issues related to the use of computers in medicine. It was also already difficult to explain the logic of its operation, and even more so to detect contradictions. The second part of the talk summarizes our research activities conducted with Frank Varenne, a philosopher of science, and Judith Nicogossian, an anthropobiologist. Their main objective is to provide and evaluate explanations of ML tools considered as black boxes. The first step of this project, presented in this talk, is to show that the validation of this black box differs epistemologically from the validation established in the framework of mathematical and causal modeling of physical phenomena. The form of the explanation has to be evaluated and chosen so as to minimize the cognitive biases of the user. This also raises an ethical problem about a possible drift towards producing explanations that are more persuasive than transparent. The evaluation must therefore take into account the trade-off between the need for transparency and the need for intelligibility.
Another important point concerns the conviviality of AI-based tools, that is, the user's ability to work with them autonomously and efficiently. A philosophical and anthropological approach is required to define the conviviality of an AI tool, which will then be translated into rules guiding its design. Last but not least, an anthropological standpoint will be summarized, in particular regarding the definition of the nature and properties of the "phygital" communication between AI and users. Finally, the last part of the talk proposes some future research directions that, in our opinion, should be included in the CHIST-ERA program.
No file deposited

Dates and versions

hal-02184552 , version 1 (16-07-2019)

Identifiers

  • HAL Id : hal-02184552 , version 1

Cite

Christophe Denis. Towards an explainable and convivial AI based tools: Illustration on medicine applications. CHIST-ERA Conference 2019, Explainable Machine Learning-based Artificial Intelligence, Jun 2019, Tallinn, Estonia. ⟨hal-02184552⟩