Incentivizing Exploration With Causal Curiosity as Intrinsic Motivation - Sorbonne Université
Conference paper, 2024

Abstract

Reinforcement learning (RL) has shown remarkable success in decision-making tasks but often lacks the ability to decipher and leverage causal relationships in complex environments. This paper introduces a novel "causal model-based reinforcement learning agent" that integrates causal inference with model-based RL to improve exploration and decision-making. Our approach incorporates an intrinsic motivation mechanism based on causal curiosity, quantified by the changes in the agent's internal causal model. We present an algorithm that maintains separate value functions for extrinsic rewards and intrinsic causal discovery, allowing for a balanced exploration of both task-oriented goals and causal structures. Theoretical analysis suggests convergence properties under certain conditions, while empirical results in a blackjack task and structural causal model environments demonstrate improved learning efficiency and strategic decision-making compared to standard RL. This work contributes to bridging the gap between reinforcement learning and causal inference.
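The abstract's core mechanism (an intrinsic reward derived from changes to the agent's internal causal model, tracked by a value function separate from the extrinsic one) can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the authors' algorithm: we assume a tabular setting, model the "causal model" as a running estimate of action-to-state effects, and invent all names (`CausalCuriosityAgent`, the mixing weight `beta`).

```python
import numpy as np

class CausalCuriosityAgent:
    """Illustrative sketch: tabular agent with separate extrinsic and
    intrinsic value tables, where the intrinsic reward is the magnitude
    of change in an estimated causal-effect model (our assumption; the
    paper's actual formulation is not reproduced here)."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, beta=0.5):
        self.q_ext = np.zeros((n_states, n_actions))   # value of extrinsic reward
        self.q_int = np.zeros((n_states, n_actions))   # value of causal discovery
        self.causal = np.zeros((n_actions, n_states))  # estimated P(s' | a)
        self.counts = np.zeros((n_actions, n_states))
        self.alpha, self.gamma, self.beta = alpha, gamma, beta

    def intrinsic_reward(self, action, next_state):
        # Update the running estimate of the action's effect distribution;
        # the intrinsic reward is how much this observation changed it.
        old = self.causal[action].copy()
        self.counts[action, next_state] += 1
        self.causal[action] = self.counts[action] / self.counts[action].sum()
        return float(np.abs(self.causal[action] - old).sum())

    def act(self, state, eps=0.1, rng=None):
        rng = rng or np.random.default_rng()
        if rng.random() < eps:
            return int(rng.integers(self.q_ext.shape[1]))
        # Act greedily on a weighted combination of both objectives.
        combined = self.q_ext[state] + self.beta * self.q_int[state]
        return int(np.argmax(combined))

    def update(self, s, a, r_ext, s_next):
        # One Q-learning step per value table, each with its own reward.
        r_int = self.intrinsic_reward(a, s_next)
        for q, r in ((self.q_ext, r_ext), (self.q_int, r_int)):
            td = r + self.gamma * q[s_next].max() - q[s, a]
            q[s, a] += self.alpha * td
```

Note the design point this sketch is meant to convey: because novelty of a transition decays as the causal estimate stabilizes, the intrinsic reward naturally fades for well-understood parts of the environment, shifting behavior back toward the extrinsic task.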

Dates and versions

hal-04822149, version 1 (06-12-2024)

Identifiers

  • HAL Id: hal-04822149, version 1

Cite

Elias Aoun Durand, Mateus Joffily, Mehdi Khamassi. Incentivizing Exploration With Causal Curiosity as Intrinsic Motivation. Intrinsically Motivated Open-ended Learning workshop at NeurIPS 2024, Dec 2024, Vancouver, Canada. ⟨hal-04822149⟩