SALAD: Self-Assessment Learning for Action Detection - Archive ouverte HAL

Conference Papers, Year: 2021

SALAD: Self-Assessment Learning for Action Detection

Abstract

The literature on self-assessment in machine learning mainly focuses on producing well-calibrated algorithms through consensus frameworks, i.e., calibration is treated as a problem to be solved. Yet, we observe that learning to be properly confident can act as a powerful regularizer and thus be an opportunity to improve performance. Specifically, we show that, within an action detection framework, learning a self-assessment score improves the whole action localization process. Experimental results show that our approach outperforms the state of the art on two action detection benchmarks. On the THUMOS14 dataset, the mAP at tIoU 0.5 is improved from 42.8% to 44.6%, and from 50.4% to 51.7% on the ActivityNet 1.3 dataset. For lower tIoU values, we achieve even more significant improvements on both datasets.
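The mAP figures above are computed at temporal intersection-over-union (tIoU) thresholds. A minimal sketch of the tIoU computation, plus a hypothetical rescoring step in which a learned self-assessment score reweights the classifier confidence (the multiplicative fusion below is an illustrative assumption, not the paper's exact formulation):

```python
# Temporal IoU (tIoU) between two action segments, as used by the
# mAP@tIoU metric on THUMOS14 and ActivityNet 1.3.
def temporal_iou(pred, gt):
    """pred, gt: (start, end) segments in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical fusion of a classifier confidence with a learned
# self-assessment score: the detection's final confidence is weighted
# by the model's own estimate of its localization quality.
def rescored_confidence(cls_score, self_assessment_score):
    return cls_score * self_assessment_score
```

Under this metric, a detection counts as a true positive at threshold 0.5 when its tIoU with a matched ground-truth segment of the same class is at least 0.5.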
Main file: Vaudaux-Ruth_SALAD_Self-Assessment_Learning_for_Action_Detection_WACV_2021_paper.pdf (1.79 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03156960, version 1 (02-03-2021)

Identifiers

  • HAL Id: hal-03156960, version 1

Cite

Guillaume Vaudaux-Ruth, Adrien Chan-Hon-Tong, Catherine Achard. SALAD: Self-Assessment Learning for Action Detection. IEEE/CVF Winter Conference on Applications of Computer Vision, Jan 2021, virtual, United States. ⟨hal-03156960⟩