Too broad to handle: can we "fix" harmonised standards on artificial intelligence by focusing on vertical sectors?
Département Sciences économiques et sociales
Preprint, Working Paper, Year: 2024


Abstract

The European approach to regulating Artificial Intelligence (AI) has relied on three main regulatory mechanisms: ethics charters, the AI Act and technical standards. Europe has based this approach on concepts such as "trustworthiness" or "risk", navigating a semantic sphere where the ethical, legal and technical fields clash. The origins of this approach in ethics charters, which usually focus on broad principles, have led to the spread, in the AI Act and in standards, of a very general discourse about AI that rarely goes into technical detail and contains elements that are unimplementable as is. In addition to this broadness of principles and requirements, the European discourse on AI, whether in ethics charters, the AI Act or standards, has also remained very horizontal. While the AI Act classifies high-risk systems according to their sector of use, the obligations applicable to them are the same regardless of sector. This poses a problem for standards, which are forced to remain at a high level, as technical requirements are too difficult to define without contextual elements. We therefore propose to refocus standards on vertical sectors, allowing them to define stricter requirements.
Main file

Gornet_verticals.pdf (335.73 KB)
Origin: files produced by the author(s)

Dates and versions

hal-04785208, version 1 (15-11-2024)

Identifiers

  • HAL Id: hal-04785208, version 1

Cite

Mélanie Gornet. Too broad to handle: can we "fix" harmonised standards on artificial intelligence by focusing on vertical sectors?. 2024. ⟨hal-04785208⟩
