Date of Award
2011
Publication Type
Master Thesis
Degree Name
M.Sc.
Department
Computer Science
Keywords
Computer Science
Supervisor
Yuan, Xiaobu (School of Computer Science)
Rights
info:eu-repo/semantics/openAccess
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 4.0 International License.
Abstract
An Embodied Conversational Agent (ECA) is an intelligent agent that interacts with users through verbal and nonverbal expressions. When used as the interface of software applications, the presence of these agents creates a positive impact on user experience. Due to their potential for providing online assistance in areas such as E-Commerce, there is an increasing need to make ECAs more believable to the user, which has been achieved mainly through realistic facial animation and emotions. This thesis presents a new approach to ECA modeling that empowers intelligent agents with synthesized emotions. The approach applies the Contextual Control Model to construct an emotion generator that uses information obtained from dialogue to select one of four contextual control modes, i.e., the Scrambled, Opportunistic, Tactical, and Strategic modes. The emotions are produced in the format of the Ortony, Clore & Collins (OCC) model of emotion expression.
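The abstract describes a pipeline in which dialogue-derived context selects one of the four contextual control modes, and the chosen mode drives an OCC-style emotion output. The thesis text itself is not reproduced here, so the following Python sketch is only an illustration of that general idea under assumed inputs; the feature names, thresholds, and mode-to-emotion mapping (e.g. `DialogueContext`, `select_control_mode`, `appraise_occ`) are invented for this example and are not taken from the thesis.

```python
from dataclasses import dataclass
from enum import Enum


class ControlMode(Enum):
    """The four modes of Hollnagel's Contextual Control Model."""
    SCRAMBLED = "scrambled"          # little or no control over the situation
    OPPORTUNISTIC = "opportunistic"  # reacting only to salient cues
    TACTICAL = "tactical"            # following known rules or plans
    STRATEGIC = "strategic"          # long-term, goal-driven control


@dataclass
class DialogueContext:
    """Hypothetical features extracted from the ongoing dialogue."""
    goal_progress: float   # 0..1, how close the agent is to its task goal
    time_pressure: float   # 0..1, urgency inferred from the exchange
    familiarity: float     # 0..1, how well the situation matches known patterns


def select_control_mode(ctx: DialogueContext) -> ControlMode:
    """Illustrative heuristic: map dialogue features to a control mode."""
    if ctx.familiarity < 0.2:
        return ControlMode.SCRAMBLED
    if ctx.time_pressure > 0.7:
        return ControlMode.OPPORTUNISTIC
    if ctx.goal_progress < 0.5:
        return ControlMode.TACTICAL
    return ControlMode.STRATEGIC


def appraise_occ(mode: ControlMode) -> str:
    """Toy mapping from the selected mode to a coarse OCC-style emotion label."""
    return {
        ControlMode.SCRAMBLED: "distress",
        ControlMode.OPPORTUNISTIC: "hope",
        ControlMode.TACTICAL: "satisfaction",
        ControlMode.STRATEGIC: "joy",
    }[mode]


if __name__ == "__main__":
    ctx = DialogueContext(goal_progress=0.3, time_pressure=0.8, familiarity=0.6)
    mode = select_control_mode(ctx)
    print(mode.value, "->", appraise_occ(mode))
```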
Recommended Citation
Vijayarangan, Rajkumar, "Emotion based Facial Animation using Four Contextual Control Modes" (2011). Electronic Theses and Dissertations. 343.
https://scholar.uwindsor.ca/etd/343