Advancing Human Computation for Emotion Detection

November 14, 2015
Duration: One semester (February – June 2015)
Goals: Design a human computation task that allows collecting affective knowledge of higher quality, and develop evaluation techniques to quantify the impact of the task design.
Responsible TAs: Valentina Sintsova and Pearl Pu
Student Name: Séphora Madjiheurem
Keywords: Human Computation, Emotion Recognition, Crowdsourcing, Amazon Mechanical Turk, Experiment Design, Quantitative Evaluation Techniques
Abstract:

Social media are filled with emotional content, which many researchers and companies seek to analyze. However, automatic methods for emotion recognition remain far below the human ability to understand emotional language. Human computation techniques are seen as a way to help machines learn to detect emotions. Online labor platforms such as Amazon Mechanical Turk make it possible to recruit individual workers for judgment tasks such as emotion detection in text. One strategy for obtaining quality answers is to combine the answers of different workers. Yet, to exploit this wisdom of the crowd, workers' answers must be comparable, which can be achieved by providing clear instructions and designing tutorials for the task. Moreover, if the human computation task is subject to systematic bias, using multiple workers is not enough to obtain quality answers. In this project, two experiments were conducted on an online labor platform. The first evaluated the impact of tutorials on the quality of workers' answers and on their engagement with the task. The second compared workers' output quality and engagement under different incentives for motivating workers. The results show that tutorials with limited instruction do not necessarily lead to poorer performance. They also show that certain motivational treatment conditions yield higher-quality work, depending on the difficulty of the task.
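
To make the aggregation and evaluation strategy concrete, below is a minimal Python sketch (not taken from the project report) of majority voting over per-item worker labels, and of scoring each experimental condition against expert gold labels. All item names, emotion labels, condition names, and gold labels here are hypothetical placeholders.

    from collections import Counter

    def majority_vote(labels):
        """Return the most frequent label among worker answers.

        Counter.most_common breaks ties arbitrarily; a real pipeline
        might instead flag tied items for additional annotation.
        """
        return Counter(labels).most_common(1)[0][0]

    def accuracy(aggregated, gold):
        """Fraction of items whose aggregated label matches the gold label."""
        hits = sum(aggregated[item] == gold[item] for item in gold)
        return hits / len(gold)

    # Hypothetical worker answers per text item, split by condition.
    answers_by_condition = {
        "with_tutorial": {
            "text_1": ["joy", "joy", "surprise"],
            "text_2": ["anger", "anger", "sadness"],
        },
        "no_tutorial": {
            "text_1": ["joy", "fear", "surprise"],
            "text_2": ["sadness", "anger", "sadness"],
        },
    }
    gold = {"text_1": "joy", "text_2": "anger"}  # expert reference labels

    for condition, answers in answers_by_condition.items():
        aggregated = {item: majority_vote(votes) for item, votes in answers.items()}
        print(condition, accuracy(aggregated, gold))

Plain majority voting treats all workers as equally reliable; as the abstract notes, when a task induces systematic bias, such aggregation alone does not guarantee quality answers.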