SEMEDA: Enhancing Segmentation Precision with Semantic Edge Aware Loss - Archive ouverte HAL
Journal article, Pattern Recognition, 2020

SEMEDA: Enhancing Segmentation Precision with Semantic Edge Aware Loss

Yifu Chen
Arnaud Dapogny
Matthieu Cord
Abstract

Per-pixel cross entropy (PPCE) is a commonly used loss for semantic segmentation tasks. However, it suffers from a number of drawbacks. Firstly, since the ground truth is usually one-hot encoded, PPCE depends only on the predicted probability of the ground truth class. Secondly, PPCE treats all pixels independently and does not take the local structure into account. While perceptual losses (e.g. matching prediction and ground truth in the embedding space of a pre-trained VGG network) would theoretically address these concerns, they do not constitute a practical solution, as segmentation masks follow a distribution that differs substantially from that of natural images. In this paper, we introduce a SEMantic EDge-Aware strategy (SEMEDA) to solve these issues. Inspired by perceptual losses, we propose to match the 'probability texture' of the predicted segmentation mask and the ground truth through a proxy network trained for semantic edge detection on the ground truth masks. Through thorough experimental validation on several datasets, we show that SEMEDA steadily improves segmentation accuracy with negligible computational overhead and can be combined with any popular segmentation network in an end-to-end training framework.
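The abstract's two ingredients, PPCE plus a perceptual-style match on the segmentation masks, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses a fixed finite-difference operator as a stand-in for the trained proxy edge-detection network, and the names `edge_map`, `semeda_style_loss`, and the weight `lam` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def per_pixel_cross_entropy(probs, labels):
    """Standard PPCE: mean over pixels of -log p(ground-truth class).

    probs:  (C, H, W) softmax probabilities
    labels: (H, W) integer class ids
    """
    h, w = labels.shape
    gt_prob = probs[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return float(np.mean(-np.log(gt_prob + 1e-12)))

def edge_map(mask_probs):
    """Toy stand-in for the proxy network: finite-difference 'edges'
    of each class-probability channel. SEMEDA instead trains a small
    network for semantic edge detection on the ground-truth masks."""
    dy = np.abs(np.diff(mask_probs, axis=1))[:, :, :-1]
    dx = np.abs(np.diff(mask_probs, axis=2))[:, :-1, :]
    return dx + dy

def semeda_style_loss(pred_probs, gt_onehot, labels, lam=0.5):
    """PPCE plus an L2 match between edge responses of the predicted
    and ground-truth probability maps (hedged sketch of the idea)."""
    ppce = per_pixel_cross_entropy(pred_probs, labels)
    edge_term = float(np.mean((edge_map(pred_probs) - edge_map(gt_onehot)) ** 2))
    return ppce + lam * edge_term
```

The edge term is what distinguishes this from plain PPCE: it penalizes predictions whose class-boundary structure differs from the ground truth, rather than treating each pixel independently.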
Main file: Chen et al. - 2020 - SEMEDA Enhancing segmentation precision with sema.pdf (7.33 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03015764, version 1 (20-11-2020)

Cite

Yifu Chen, Arnaud Dapogny, Matthieu Cord. SEMEDA: Enhancing Segmentation Precision with Semantic Edge Aware Loss. Pattern Recognition, 2020, 108, pp.107557. ⟨10.1016/j.patcog.2020.107557⟩. ⟨hal-03015764⟩