ECE Seminar: Paralinguistic speech attribute recognition and multimodal behavior signal analysis
Monday, November 6, 2017 - 12:00pm to 1:00pm
Ming Li, Associate Professor, SYSU-CMU Joint Institute of Engineering, School of Electronics and Information Technology, Sun Yat-sen University
The speech signal contains not only lexical information but also delivers various kinds of paralinguistic attribute information, such as speaker identity, language, gender, age, emotion, channel, voicing, and psychological state. The core technical question behind these tasks is utterance-level supervised learning on text-independent speech of variable duration. I will use speaker verification as an example to introduce the framework of paralinguistic speech attribute recognition. Moreover, we can extend the signal from speech to multimodal human-centered behavior data. I will introduce our work on multimodal behavior signal analysis and interpretation. We apply signal processing and machine learning techniques to human behavior signals, such as audio, visual, physiological, and eye-tracking data, to provide objective and quantitative behavior measurements or codes. Example studies include autism spectrum disorder, obesity, biometrics, and piano learning.
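To make the utterance-level framing concrete, here is a minimal sketch (not the speaker's actual system) of how text-independent speaker verification can map a variable-duration utterance to a fixed-length embedding and compare embeddings with a cosine score. The mean-pooling scheme, function names, and threshold below are illustrative assumptions only:

```python
import numpy as np

def utterance_embedding(frames: np.ndarray) -> np.ndarray:
    """Pool variable-length frame features (T x D) into a fixed
    D-dimensional, unit-norm utterance embedding via mean pooling.
    (Illustrative choice; real systems use learned pooling/embeddings.)"""
    emb = frames.mean(axis=0)
    return emb / (np.linalg.norm(emb) + 1e-9)

def verify(enroll_frames: np.ndarray, test_frames: np.ndarray,
           threshold: float = 0.5):
    """Cosine similarity between the two utterance embeddings;
    accept the trial if the score exceeds the (assumed) threshold."""
    score = float(np.dot(utterance_embedding(enroll_frames),
                         utterance_embedding(test_frames)))
    return score, score >= threshold

# Synthetic demo: same-speaker frames cluster around one point,
# a different speaker's frames around another.
rng = np.random.default_rng(0)
speaker_mean = rng.normal(size=16)
same_score, same_accept = verify(
    speaker_mean + 0.1 * rng.normal(size=(200, 16)),   # enrollment, 200 frames
    speaker_mean + 0.1 * rng.normal(size=(150, 16)))   # test, 150 frames
diff_score, diff_accept = verify(
    speaker_mean + 0.1 * rng.normal(size=(200, 16)),
    -speaker_mean + 0.1 * rng.normal(size=(180, 16)))  # different speaker
```

Note that the two utterances have different frame counts (200 vs. 150); pooling is what makes supervised learning over flexible-duration input tractable at the utterance level.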