
SALAD: Self-Assessment Learning for Action Detection

Abstract: Literature on self-assessment in machine learning mainly focuses on producing well-calibrated algorithms through consensus frameworks, i.e., calibration is treated as a problem. Yet, we observe that learning to be properly confident can act as a powerful regularizer and is thus an opportunity to improve performance. Specifically, we show that, within an action detection framework, learning a self-assessment score improves the whole action localization process. Experimental results show that our approach outperforms the state of the art on two action detection benchmarks. On the THUMOS14 dataset, the mAP at tIoU @0.5 is improved from 42.8% to 44.6%, and from 50.4% to 51.7% on the ActivityNet1.3 dataset. For lower tIoU values, we achieve even more significant improvements on both datasets.
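The tIoU metric cited above measures the temporal overlap between a predicted action segment and a ground-truth segment; a prediction is counted as correct at threshold @0.5 when this overlap reaches 0.5. A minimal sketch (the function name and the (start, end) segment encoding are illustrative, not taken from the paper):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two action segments given as (start, end) times."""
    # Length of the overlapping interval (0 if the segments are disjoint).
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    # Union length = sum of both lengths minus the overlap.
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Overlap of 8 s over a union of 12 s: tIoU = 2/3, so this prediction
# would count as a true positive at the @0.5 threshold.
print(temporal_iou((10.0, 20.0), (12.0, 22.0)))
```

The reported mAP then averages precision over recall levels per class, with matches decided by this tIoU threshold; stricter thresholds (higher tIoU) demand tighter localization.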
Document type: Conference papers
Contributor: Catherine Achard
Submitted on: Tuesday, March 2, 2021 - 6:32:11 PM
Last modification on: Monday, June 14, 2021 - 10:24:37 AM
Long-term archiving on: Monday, May 31, 2021 - 7:42:54 PM


Files produced by the author(s)


  • HAL Id: hal-03156960, version 1


Guillaume Vaudaux-Ruth, Adrien Chan-Hon-Tong, Catherine Achard. SALAD: Self-Assessment Learning for Action Detection. IEEE/CVF Winter Conference on Applications of Computer Vision, Jan 2021, virtual, United States. ⟨hal-03156960⟩


