Compress to Create - Sorbonne Université
Preprint, working paper. Year: 2020

Compress to Create

Jean-Pierre Briot

Abstract

The current tsunami of deep learning has already conquered new areas, such as the generation of creative musical content. The motivation is to use the capacity of modern deep learning architectures, and the associated training and generation techniques, to automatically learn musical styles from arbitrary musical corpora and then to generate musical samples from the estimated distribution, with some degree of control over the generation. In this article, we analyze the use of autoencoder architectures and how their ability to compress information turns out to be an interesting source for generation. Autoencoders are good at representation learning, that is, at extracting a compressed and abstract representation (a set of latent variables) common to the set of training examples. By choosing various instances of this abstract representation (i.e., by sampling the latent variables), one may efficiently generate various instances within the style that has been learnt. Furthermore, one may use more sophisticated ways of controlling the generation, such as interpolation, recursion, or objective optimization, as will be illustrated by various examples.
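As a concrete illustration of the idea sketched in the abstract (a minimal sketch, not taken from the paper itself), the following PyTorch code trains a small autoencoder on toy data and then generates new instances by decoding interpolated latent codes, one of the control strategies mentioned above. The architecture, dimensions, and random data are illustrative assumptions only.

```python
# Minimal illustrative sketch: an autoencoder whose latent space is sampled
# and interpolated to generate new instances. Toy random data stands in for
# an encoded musical corpus; all sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=128, latent_dim=8):
        super().__init__()
        # Encoder compresses the input into a small set of latent variables.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # Decoder reconstructs an instance from the latent variables.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train by reconstruction on the toy corpus.
corpus = torch.rand(256, 128)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(corpus), corpus)
    loss.backward()
    optimizer.step()

# Generation: decode points of the learnt latent space.
with torch.no_grad():
    z_a = model.encoder(corpus[0])  # latent code of one training example
    z_b = model.encoder(corpus[1])  # latent code of another
    for alpha in (0.0, 0.5, 1.0):
        z = (1 - alpha) * z_a + alpha * z_b   # interpolation in latent space
        sample = model.decoder(z)             # a new instance "within the style"
        print(alpha, sample.shape)
```

The same decoder can also be fed latent codes obtained by recursion (re-encoding a generated output) or by optimizing the latent variables against an objective, the other controls listed in the abstract.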
Main file

compress-generate-musmat2020.pdf (3.49 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02567390 , version 1 (07-05-2020)
hal-02567390 , version 2 (09-05-2020)
hal-02567390 , version 3 (12-05-2020)
hal-02567390 , version 4 (20-05-2020)

Identifiers

  • HAL Id : hal-02567390 , version 2

Cite

Jean-Pierre Briot. Compress to Create. 2020. ⟨hal-02567390v2⟩
207 views
82 downloads
