On leveraging crowdsourced data for automatic perceived stress detection
Abstract
Resorting to crowdsourcing platforms is a popular way to obtain annotations: multiple potentially noisy answers can be aggregated to recover an underlying ground truth. However, searching for a unique ground truth may be inappropriate when crowd workers are asked for opinions, notably on subjective phenomena such as stress. In this paper, we discuss how crowdsourced annotations can be better exploited, with an application to the automatic detection of perceived stress. To this end, we first acquired video data from 44 subjects placed in a stressful situation and gathered answers to a binary question through a crowdsourcing platform. We then propose to integrate two measures derived from the set of gathered answers into the machine learning framework. First, we show that using the level of consensus among crowd worker answers substantially increases classification accuracy. Second, we show that it is suitable to directly predict, for each video, the proportion of positive answers given by the crowd workers. We thus provide a thorough study of how crowdsourced annotations can enhance the performance of classification and regression methods.
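The two measures mentioned above (the consensus level among crowd worker answers and the proportion of positive answers) can be illustrated with a minimal sketch. The example data below is hypothetical, not taken from the study, and the exact definitions used in the paper may differ; here consensus is simply the distance of the vote from a 50/50 split:

```python
import numpy as np

# Hypothetical example: each row holds the binary answers ("stressed?" 1/0)
# given by five crowd workers for one video.
answers = np.array([
    [1, 1, 1, 0, 1],   # video 1
    [0, 0, 1, 0, 0],   # video 2
    [1, 0, 1, 0, 1],   # video 3
])

# Proportion of positive answers per video: a soft label that can serve
# directly as a regression target.
soft_labels = answers.mean(axis=1)

# Majority-vote hard label: the usual classification target.
hard_labels = (soft_labels >= 0.5).astype(int)

# Consensus level in [0, 1]: how far the vote is from a perfect tie.
# 1 means all workers agree, 0 means a 50/50 split.
consensus = 2 * np.abs(soft_labels - 0.5)
```

In this setup, the consensus level could, for instance, be passed as a per-sample weight to a classifier (e.g. via the `sample_weight` argument of scikit-learn estimators), while the soft labels replace the binary ground truth in a regression model.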