

Computer Science


Xiaobu Yuan



This work is licensed under a Creative Commons Attribution 4.0 International License.


E-tutoring systems have emerged as effective tools for remote learning and personalized education. To foster engaging and interactive experiences, however, the communication and expressiveness of the virtual tutors within these systems must be enhanced. This research integrates facial expression animation and lip-syncing capabilities into an e-tutoring system, aiming to improve the realism and effectiveness of virtual tutor interactions. The study presents a novel approach to animating the facial expressions of the virtual tutor's avatar using computer graphics, 3D animation, and computer vision techniques. Facial movements are tracked and mapped accurately onto the avatar by working with 3D mesh files and interpolating the position of each face vertex. This enables the virtual tutor to display a range of facial expressions, including happiness, surprise, and concern, to convey emotions and engage the learner effectively. Additionally, the system incorporates lip-syncing techniques to synchronize the avatar's lip movements with the spoken content: by analyzing the audio input, the system identifies phonemes and maps them to the corresponding facial movements, ensuring realistic lip-syncing and enhancing the perception of natural speech.
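The two techniques the abstract describes can be sketched in outline. The following is a minimal, hypothetical illustration (none of these names come from the thesis itself): per-vertex linear interpolation between a neutral face mesh and an expression target, as in standard blend-shape animation, and a small phoneme-to-viseme lookup of the kind used for lip-syncing. Real systems use far larger viseme inventories and time-aligned phoneme streams.

```python
import numpy as np

def blend_expression(neutral, target, weight):
    """Linearly interpolate every face vertex toward an expression target.

    neutral, target: (N, 3) arrays of vertex positions from the 3D mesh files;
    weight in [0, 1] controls how strongly the expression is applied.
    """
    neutral = np.asarray(neutral, dtype=float)
    target = np.asarray(target, dtype=float)
    return (1.0 - weight) * neutral + weight * target

# Hypothetical phoneme-to-viseme table; a production lip-sync pipeline would
# map a full phoneme inventory (e.g. ARPAbet) onto its avatar's mouth shapes.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "wide", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_on_lip", "V": "teeth_on_lip",
}

def visemes_for(phonemes):
    """Map a phoneme sequence to viseme labels, defaulting to 'rest'."""
    return [PHONEME_TO_VISEME.get(p, "rest") for p in phonemes]
```

In use, each animation frame would pick a `weight` from an easing curve (so expressions ramp in and out smoothly) and a viseme from the phoneme aligned with the current audio timestamp, then pose the avatar's mesh accordingly.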