Preprint / working paper. Year: 2021

User-guided one-shot deep model adaptation for music source separation

Abstract

Music source separation is the task of isolating the individual instruments that are mixed in a musical piece. This task is particularly challenging, and even state-of-the-art models can hardly generalize to unseen test data. Nevertheless, prior knowledge about individual sources can be used to better adapt a generic source separation model to the observed signal. In this work, we propose to exploit a temporal segmentation provided by the user, indicating when each instrument is active, in order to fine-tune a pre-trained deep model for source separation and adapt it to one specific mixture. This paradigm can be referred to as user-guided one-shot deep model adaptation for music source separation, as the adaptation acts on the target song instance only. Our results are promising and show that state-of-the-art source separation models have a large margin for improvement, especially for instruments that are underrepresented in the training data.
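The abstract does not spell out the adaptation objective, so the following is only a minimal PyTorch sketch of the general idea under stated assumptions: a pre-trained separator is fine-tuned on the single target mixture, using the user's temporal segmentation to penalize energy assigned to a source in frames where the user marked it inactive, plus a mixture-consistency term. The model, loss, and all names (ToySeparator, adapt_one_shot, the activity matrix) are illustrative assumptions, not the method described in the paper.

```python
# Hypothetical sketch of user-guided one-shot adaptation; NOT the paper's
# exact objective or architecture. It assumes a generic PyTorch separator
# that maps a mixture magnitude spectrogram to one estimate per source, and
# a user-provided binary activity matrix of shape (sources, frames).
import torch
import torch.nn as nn


class ToySeparator(nn.Module):
    """Stand-in for a pre-trained separator: mixture spectrogram -> per-source estimates."""
    def __init__(self, n_sources: int, n_bins: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, 256), nn.ReLU(),
            nn.Linear(256, n_sources * n_bins),
        )
        self.n_sources, self.n_bins = n_sources, n_bins

    def forward(self, mix_mag):                  # mix_mag: (frames, bins)
        masks = self.net(mix_mag)                # (frames, sources * bins)
        masks = masks.view(-1, self.n_sources, self.n_bins)
        masks = torch.softmax(masks, dim=1)      # masks sum to 1 across sources
        return masks * mix_mag.unsqueeze(1)      # per-source magnitude estimates


def adapt_one_shot(model, mix_mag, activity, steps=200, lr=1e-4):
    """Fine-tune `model` on a single mixture using the user's temporal segmentation.

    activity: (sources, frames) binary tensor, 1 where the user marks the
    instrument as active. The (assumed) adaptation loss penalizes energy
    assigned to a source in frames where the user marked it silent.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    silence = (1.0 - activity).T.unsqueeze(-1)            # (frames, sources, 1)
    for _ in range(steps):
        est = model(mix_mag)                              # (frames, sources, bins)
        leak = (silence * est).pow(2).mean()              # energy leaking into silent sources
        consist = (est.sum(dim=1) - mix_mag).pow(2).mean()  # estimates should sum to the mixture
        loss = leak + consist
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    n_sources, n_bins, n_frames = 4, 513, 100
    model = ToySeparator(n_sources, n_bins)      # pretend this is the pre-trained model
    mix = torch.rand(n_frames, n_bins)           # magnitude spectrogram of the target mixture
    activity = torch.randint(0, 2, (n_sources, n_frames)).float()  # user segmentation
    adapt_one_shot(model, mix, activity)
```

In this sketch the segmentation acts purely as a weak label on the network output; the actual paper may constrain the adaptation differently, for example by updating only some layers or weighting the loss terms otherwise.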
Main file
WASPAA2021_Hal.pdf (3.17 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03219350, version 1 (06-05-2021)
hal-03219350, version 2 (02-06-2021)
hal-03219350, version 3 (29-07-2021)

Identifiers

  • HAL Id: hal-03219350, version 1

Cite

Giorgia Cantisani, Alexey Ozerov, Slim Essid, Gael Richard. User-guided one-shot deep model adaptation for music source separation. 2021. ⟨hal-03219350v1⟩
481 views
631 downloads
