Journal article in Optics Express, 2022

Adaptive optics control with multi-agent model-free reinforcement learning

Abstract

We present a novel formulation of closed-loop adaptive optics (AO) control as a multi-agent reinforcement learning (MARL) problem in which the controller learns a non-linear policy and does not need a priori information on the dynamics of the atmosphere. We identify the challenges of applying a reinforcement learning (RL) method to AO and, to address them, propose combining model-free MARL for control with an autoencoder neural network to mitigate the effect of noise. Moreover, we extend existing methods of error budget analysis to include an RL controller. Experimental results for an 8 m telescope equipped with a 40x40 Shack-Hartmann system show a significant increase in performance over the integrator baseline and performance comparable to a model-based predictive approach, a linear quadratic Gaussian controller with perfect knowledge of atmospheric conditions. Finally, the error budget analysis provides evidence that the RL controller partially compensates for bandwidth error and helps mitigate the propagation of aliasing.
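To make the closed-loop structure described in the abstract concrete, the sketch below shows, in schematic form, one control step in which each agent commands a local group of deformable-mirror (DM) actuators from denoised wavefront-sensor measurements. This is a minimal illustration only: the sizes (N_ACTUATORS, N_AGENTS), the placeholder denoise and policy functions, and the block-wise split of actuators are assumptions for readability, not the authors' implementation (the paper uses a trained autoencoder and a learned non-linear RL policy on a 40x40 Shack-Hartmann system).

```python
import numpy as np

# Illustrative sketch only. Names, shapes, and the simplified per-agent split
# below are assumptions, not the method from the paper; they only show the
# general shape of a MARL closed-loop AO step.

N_ACTUATORS = 16                      # toy DM size (a real 40x40 SH system is far larger)
N_AGENTS = 4                          # each agent controls a contiguous block of actuators
ACT_PER_AGENT = N_ACTUATORS // N_AGENTS


def denoise(measurements):
    """Stand-in for the autoencoder: a trained network would reconstruct
    noise-free slopes; here we simply pass the measurements through."""
    return measurements


def policy(agent_obs, weights):
    """Toy per-agent policy (linear + tanh); the paper learns a non-linear
    neural policy with model-free RL."""
    return np.tanh(weights @ agent_obs)


def closed_loop_step(residual_slopes, weights_per_agent):
    """One control step: denoise measurements, let each agent act on its
    local observation, and assemble the full DM command."""
    obs = denoise(residual_slopes)
    command = np.zeros(N_ACTUATORS)
    for a in range(N_AGENTS):
        sl = slice(a * ACT_PER_AGENT, (a + 1) * ACT_PER_AGENT)
        command[sl] = policy(obs[sl], weights_per_agent[a])
    return command


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [0.1 * rng.normal(size=(ACT_PER_AGENT, ACT_PER_AGENT))
               for _ in range(N_AGENTS)]
    slopes = rng.normal(size=N_ACTUATORS)      # noisy residual measurements (toy)
    print("DM command:", np.round(closed_loop_step(slopes, weights), 3))
```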
Main file: oe-30-2-2991.pdf (3.28 MB)
Origin: Publication funded by an institution

Dates and versions

hal-03549248, version 1 (31-01-2022)

Identifiers

Cite

B. Pou, F. Ferreira, E. Quinones, D. Gratadour, M. Martin. Adaptive optics control with multi-agent model-free reinforcement learning. Optics Express, 2022, 30 (2), pp.2991-3015. ⟨10.1364/OE.444099⟩. ⟨hal-03549248⟩