
Person-Independent Computational Recognition of Emotions Elicited Using Japanese Kanji Words: Machine Learning Approach Using Multimodal Physiological Signals


Authors:
Kazuhiko Takahashi; Shin-ya Namikawa; Masafumi Hashimoto
Abstract
In this paper, person-independent computational emotion recognition using multimodal physiological signals (in this study, plethysmogram, skin conductance change, respiration rate and skin temperature) is investigated. Psychophysical experiments are conducted using Japanese kanji words to elicit three emotions (positive, negative and neutral) in subjects and thereby evoke physiological responses. Machine learning approaches, namely multilayer neural networks, support vector machines, decision trees and random forests, are used to design emotion recognition systems, and these systems are trained and tested on the data gathered in the psychophysical experiments to investigate their characteristics. In the computational emotion recognition experiments, the maximum average recognition rates over all three emotions are 38% using multilayer neural networks, 40% using support vector machines with a Gaussian kernel function, 37% using decision trees and 33% using random forests. These results show that combining multimodal physiological signals with a machine learning approach is feasible and appropriate for person-independent computational emotion recognition.
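
As an illustration of how such a person-independent evaluation might be set up, the following is a minimal sketch (not the authors' code) using scikit-learn and synthetic placeholder data standing in for the multimodal physiological features. The feature dimensions, subject counts and classifier hyperparameters are assumptions for illustration only; leave-one-subject-out cross-validation is used here as one common way to realize the person-independent condition described in the abstract.

# Minimal sketch: person-independent evaluation of the four classifier
# families named in the abstract, on synthetic stand-ins for the multimodal
# physiological features (plethysmogram, skin conductance change,
# respiration rate, skin temperature). All sizes/hyperparameters are assumed.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 10, 30, 8   # hypothetical sizes
X = rng.normal(size=(n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 3, size=len(X))         # 0 = positive, 1 = negative, 2 = neutral
groups = np.repeat(np.arange(n_subjects), trials_per_subject)   # subject IDs

classifiers = {
    "multilayer neural network": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000),
    "SVM (Gaussian kernel)": SVC(kernel="rbf", gamma="scale"),
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(n_estimators=100),
}

# Leave-one-subject-out cross-validation keeps every test subject's data out
# of training, which is what "person-independent" recognition requires.
cv = LeaveOneGroupOut()
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(model, X, y, cv=cv, groups=groups)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")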
Keywords
Emotion Recognition; Japanese Kanji Words; Physiological Signal; Neural Networks; Support Vector Machines; Decision Trees; Random Forests
Pages: 149–160