Few-shot Quality-Diversity Optimization - Sorbonne Université
Journal article in IEEE Robotics and Automation Letters, 2022

Few-shot Quality-Diversity Optimization

Abstract

In the past few years, a considerable amount of research has been dedicated to exploiting previous learning experiences and to designing Few-shot and Meta Learning approaches, in problem domains ranging from Computer Vision to Reinforcement Learning based control. A notable exception, where, to the best of our knowledge, little to no effort has been made in this direction, is Quality-Diversity (QD) optimization. QD methods have been shown to be effective tools for dealing with deceptive minima and sparse rewards in Reinforcement Learning. However, they remain costly due to their reliance on inherently sample-inefficient evolutionary processes. We show that, given examples from a task distribution, information about the paths taken by optimization in parameter space can be leveraged to build a prior population, which, when used to initialize QD methods in unseen environments, allows for few-shot adaptation. Our proposed method does not require backpropagation. It is simple to implement and scale, and furthermore, it is agnostic to the underlying models that are being trained. Experiments carried out in both sparse and dense reward settings using robotic manipulation and navigation benchmarks show that it considerably reduces the number of generations required for QD optimization in these environments.
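The core idea described in the abstract, reusing parameter vectors visited during optimization on training tasks to seed the initial QD population on an unseen task, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the trajectory collection is stubbed out with a random walk, and all names (`collect_trajectory`, `build_prior_population`) and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def collect_trajectory(task_seed, steps=50, dim=8):
    # Stand-in for recording the parameter vectors visited while
    # optimizing a policy on one training task. A real run would log
    # the genomes produced along the optimization path; here a random
    # walk in parameter space serves as a placeholder.
    theta = rng.normal(size=dim)
    path = []
    for _ in range(steps):
        theta = theta + 0.1 * rng.normal(size=dim)
        path.append(theta.copy())
    return np.stack(path)

def build_prior_population(train_task_seeds, pop_size=20):
    # Pool the parameters seen along all training trajectories and
    # sample a prior population from that pool. This population then
    # replaces the usual random initialization of a QD algorithm
    # (e.g. MAP-Elites) on an unseen task.
    pool = np.concatenate([collect_trajectory(s) for s in train_task_seeds])
    idx = rng.choice(len(pool), size=pop_size, replace=False)
    return pool[idx]

# Seed a QD run on a new environment with the prior population instead
# of freshly sampled random genomes.
prior = build_prior_population(train_task_seeds=range(5), pop_size=20)
print(prior.shape)  # (20, 8)
```

Because the prior population only changes how the initial genomes are chosen, this scheme is agnostic to the underlying QD algorithm and model architecture, which matches the backpropagation-free, model-agnostic property claimed in the abstract.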

Dates and versions

hal-03569179, version 1 (07-10-2024)

Identifiers

Cite

Achkan Salehi, Alexandre Coninx, Stephane Doncieux. Few-shot Quality-Diversity Optimization. IEEE Robotics and Automation Letters, 2022, 7 (2), pp. 4424-4431. ⟨10.1109/LRA.2022.3148438⟩. ⟨hal-03569179⟩