F. Cummins, Speech rhythm and rhythmic taxonomy, Speech Prosody, pp.121-126, 2002.

S. Tilsen, Multitimescale dynamical interactions between speech rhythm and gesture, Cogn. Sci., vol.33, pp.839-879, 2009.

J. R. Evans and M. Clynes, Rhythm in psychological, linguistic, and musical processes, 1986.

P. Fraisse, Les structures rythmiques : étude psychologique [Rhythmic structures: a psychological study]. Publications Universitaires de Louvain, 1956.

E. Zwicker and H. Fastl, Psychoacoustics: facts and models, 1990.

D. Gibbon and U. Gut, Measuring speech rhythm, pp.95-98, 2001.

K. R. Scherer, Psychological models of emotion, pp.137-162, 2000.

J. Ang, R. Dhillon, E. Shriberg, and A. Stolcke, Prosody-based automatic detection of annoyance and frustration in human-computer dialog, 7th ICSLP (Interspeech), pp.67-79, 2002.

V. Dellwo, The role of speech rate in perceiving speech rhythm, Speech Prosody, pp.375-378, 2008.

F. Ramus, M. Nespor, and J. Mehler, Correlates of linguistic rhythm in the speech signal, Cognition, vol.73, pp.265-292, 1999.
URL : https://hal.archives-ouvertes.fr/hal-00260030

M. C. Brady and R. F. Port, Speech rhythm and rhythmic taxonomy, 16th ICPhS, pp.337-342, 2006.

V. Dellwo, Rhythm and speech rate: A variation coefficient for ΔC, pp.231-241, 2006.

E. Grabe and E. Low, Durational variability in speech and the rhythm class hypothesis, Papers in Laboratory Phonology VII, vol.7, pp.515-546, 2002.

L. M. Smith, A multiresolution time-frequency analysis and interpretation of musical rhythm, 2000.

F. Lerdahl and R. Jackendoff, A generative theory of tonal music, 1996.

S. Tilsen and K. Johnson, Low-frequency Fourier analysis of speech rhythm, J. Acoust. Soc. Am., vol.124, issue.2, pp.EL34-EL39, 2008.

F. Ringeval and M. Chetouani, Hilbert-Huang transform for nonlinear characterization of speech rhythm, NOLISP, 2009.

R. Drullman, J. M. Festen, and R. Plomp, Effect of temporal envelope smearing on speech reception, J. Acoust. Soc. Am., vol.95, pp.1053-1064, 1994.

H. Xie and Z. Wang, Mean frequency derived via Hilbert-Huang transform with application to fatigue EMG signal analysis, Computer Methods and Programs in Biomedicine, vol.82, issue.2, pp.114-120, 2006.

B. Schuller, A. Batliner, D. Seppi, S. Steidl, T. Vogt et al., The relevance of feature type for the automatic classification of emotional user states: low level descriptors and functionals, Interspeech, pp.2253-2256, 2007.

B. Schuller, M. Valstar, F. Eyben, G. McKeown, R. Cowie et al., AVEC 2011 - the first international audio/visual emotion challenge, ACII 2011, LNCS vol.6975, pp.415-424, 2011.

F. Eyben, M. Wöllmer, and B. Schuller, openSMILE: the Munich versatile and fast open-source audio feature extractor, ACM Multimedia (MM), pp.1459-1462, 2010.

F. Ringeval and M. Chetouani, A vowel-based approach for acted emotion recognition, pp.2763-2766, 2008.
URL : https://hal.archives-ouvertes.fr/hal-02423529