Volume 16 / Issue 6

Available in: PDF (1007 kB), PS (5 MB)
 
 
DOI: 10.3217/jucs-016-06-0903

 

3D Head Pose and Facial Expression Tracking using a Single Camera

Lucas D. Terissi (Universidad Nacional de Rosario, Argentina)

Juan C. Gómez (Universidad Nacional de Rosario, Argentina)

Abstract: Algorithms for 3D head pose and facial expression tracking using a single camera (monocular image sequences) are presented in this paper. The proposed method combines feature-based and model-based approaches for pose estimation. A generic 3D face model, which can be adapted to any person, is used for the tracking. In contrast to other methods in the literature, the proposed method does not require a training stage. It requires only an image of the person's face, facing the camera, to which the model is fitted manually through a graphical user interface. The algorithms were evaluated perceptually and quantitatively on two video databases. Simulation results show that the proposed tracking algorithms correctly estimate the head pose and facial expression, even under occlusions, changes in the distance to the camera, and the presence of other persons in the scene. Both perceptual and quantitative results are similar to those obtained with other methods proposed in the literature. Although the algorithms were not optimized for speed, they run in near real time. Additionally, the proposed system delivers separate head pose and facial expression information. Since the facial expression information, which is represented by only six parameters, is independent of the head pose information, the tracking algorithms could also be used for facial expression analysis and video-driven facial animation.
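The abstract describes fitting a generic 3D face model to image features to recover head pose from a single camera. As a rough illustration of that underlying idea only (not the authors' algorithm), the sketch below recovers a head rotation from 2D landmark positions under a weak-perspective camera model. The landmark set `MODEL_POINTS`, the function names, and the camera model are all assumptions made for this example.

```python
import numpy as np

# Hypothetical generic 3D face-model landmarks (nose tip, chin, eye
# corners, mouth corners), in arbitrary model units. These coordinates
# are illustrative and are NOT the face model used in the paper.
MODEL_POINTS = np.array([
    [   0.0,    0.0,    0.0],   # nose tip
    [   0.0, -330.0,  -65.0],   # chin
    [-225.0,  170.0, -135.0],   # left eye outer corner
    [ 225.0,  170.0, -135.0],   # right eye outer corner
    [-150.0, -150.0, -125.0],   # left mouth corner
    [ 150.0, -150.0, -125.0],   # right mouth corner
])

def estimate_pose(pts_2d, pts_3d):
    """Recover rotation R, scale s, translation t under a
    weak-perspective camera: x = s * (R @ X)[:2] + t."""
    x_c = pts_2d - pts_2d.mean(axis=0)
    X_c = pts_3d - pts_3d.mean(axis=0)
    # Least-squares fit of the 2x3 projection matrix A = s * R[:2]
    A = np.linalg.lstsq(X_c, x_c, rcond=None)[0].T
    s = 0.5 * (np.linalg.norm(A[0]) + np.linalg.norm(A[1]))
    # Orthonormalise the first two rows of R via SVD
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    R2 = U @ Vt
    # Third row of a proper rotation is the cross product of the first two
    R = np.vstack([R2, np.cross(R2[0], R2[1])])
    t = pts_2d.mean(axis=0) - s * (R[:2] @ pts_3d.mean(axis=0))
    return R, s, t

def euler_to_R(yaw, pitch, roll):
    """Rotation matrix from Euler angles (Z-Y-X convention)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return Rz @ Ry @ Rx

# Synthetic check: rotate and project the model, then recover the pose.
R_true = euler_to_R(np.radians(20), np.radians(-10), np.radians(5))
proj = 0.4 * (MODEL_POINTS @ R_true.T)[:, :2] + np.array([120.0, 90.0])
R_est, s_est, t_est = estimate_pose(proj, MODEL_POINTS)
print(np.allclose(R_est, R_true, atol=1e-6))  # prints: True
```

A real tracker of the kind the abstract describes would additionally track the feature points across frames and deform the model to capture expression; this sketch covers only the rigid-pose part, under a weak-perspective approximation rather than full perspective.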

Keywords: 3D deformable models, computer vision, facial expression, head pose tracking, image processing

Categories: I.2.10, I.4, I.4.8