TR2016-129
Robust Face Alignment Using a Mixture of Invariant Experts
- C. Oncel Tuzel, Tim K. Marks, and Salil Tambe, "Robust Face Alignment Using a Mixture of Invariant Experts", European Conference on Computer Vision (ECCV), DOI: 10.1007/978-3-319-46454-1_50, October 2016, vol. 9909, pp. 825-841.
@inproceedings{Tuzel2016oct,
  author = {Tuzel, C. Oncel and Marks, Tim K. and Tambe, Salil},
  title = {Robust Face Alignment Using a Mixture of Invariant Experts},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = 2016,
  volume = 9909,
  pages = {825--841},
  month = oct,
  doi = {10.1007/978-3-319-46454-1_50},
  url = {https://www.merl.com/publications/TR2016-129}
}
Abstract:
Face alignment, which is the task of finding the locations of a set of facial landmark points in an image of a face, is useful in widespread application areas. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a mixture of regression experts. Each expert learns a customized regression model that is specialized to a different subset of the joint space of pose and expressions. The system is invariant to a predefined class of transformations (e.g., affine), because the input is transformed to match each expert's prototype shape before the regression is applied. We also present a method to include deformation constraints within the discriminative alignment framework, which makes our algorithm more robust. Our algorithm significantly outperforms previous methods on publicly available face alignment datasets.
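The stage structure described in the abstract — warp the input to each expert's canonical (prototype) frame, apply that expert's regressor there, warp the update back, and blend across experts — can be illustrated with a toy sketch. This is a minimal, hypothetical illustration, not the paper's implementation: the "experts" here are fixed offset vectors in each expert's canonical frame, the invariance class is a similarity transform (the paper considers a predefined class such as affine), and the gating weights are supplied by hand rather than learned.

```python
def similarity_to_prototype(shape, prototype):
    """Closed-form least-squares similarity transform (a, b, tx, ty)
    mapping `shape` onto `prototype`; a point (x, y) maps to
    (a*x - b*y + tx, b*x + a*y + ty)."""
    n = len(shape)
    mx = sum(p[0] for p in shape) / n
    my = sum(p[1] for p in shape) / n
    px = sum(q[0] for q in prototype) / n
    py = sum(q[1] for q in prototype) / n
    sxx = sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in shape)
    a = sum((p[0] - mx) * (q[0] - px) + (p[1] - my) * (q[1] - py)
            for p, q in zip(shape, prototype)) / sxx
    b = sum((p[0] - mx) * (q[1] - py) - (p[1] - my) * (q[0] - px)
            for p, q in zip(shape, prototype)) / sxx
    tx = px - (a * mx - b * my)
    ty = py - (b * mx + a * my)
    return a, b, tx, ty

def apply_sim(T, shape):
    a, b, tx, ty = T
    return [(a * x - b * y + tx, b * x + a * y + ty) for x, y in shape]

def invert_sim(T):
    a, b, tx, ty = T
    s2 = a * a + b * b
    ai, bi = a / s2, -b / s2
    return ai, bi, -(ai * tx - bi * ty), -(bi * tx + ai * ty)

def mix_stage(shape, experts):
    """One cascade stage: each expert sees the shape warped into its
    own prototype frame, predicts an update there (here: a fixed
    per-landmark offset), and the updates are warped back to the
    image frame and blended by the gating weights."""
    total_w = sum(e["weight"] for e in experts)
    blended = [(0.0, 0.0)] * len(shape)
    for e in experts:
        T = similarity_to_prototype(shape, e["prototype"])
        canon = apply_sim(T, shape)
        updated = [(x + dx, y + dy)
                   for (x, y), (dx, dy) in zip(canon, e["offset"])]
        back = apply_sim(invert_sim(T), updated)
        w = e["weight"] / total_w
        blended = [(bx + w * x, by + w * y)
                   for (bx, by), (x, y) in zip(blended, back)]
    return blended
```

The warp-to-prototype step is what makes each expert invariant to the chosen transformation class: a rotated input is normalized before the regressor sees it, so the regressor only has to model residual shape variation.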
Related News & Events
-
NEWS Tim Marks to give invited Keynote talk at AMFG 2017 Workshop, at ICCV 2017 Date: October 28, 2017
Where: Venice, Italy
MERL Contact: Tim K. Marks
Research Area: Machine Learning
Brief: MERL Senior Principal Research Scientist Tim K. Marks will give an invited keynote talk at the 2017 IEEE Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2017). The workshop will take place on October 28, 2017, at the International Conference on Computer Vision (ICCV 2017) in Venice, Italy.
-
EVENT Tim Marks to give lunch talk at Face and Gesture 2017 conference Date: Thursday, June 1, 2017
Location: IEEE Conference on Automatic Face and Gesture Recognition (FG 2017), Washington, DC
Speaker: Tim K. Marks
MERL Contact: Tim K. Marks
Research Area: Machine Learning
Brief: MERL Senior Principal Research Scientist Tim K. Marks will give the invited lunch talk on Thursday, June 1, at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017). The talk is entitled "Robust Real-Time 3D Head Pose and 2D Face Alignment."
-
NEWS MERL Researcher Tim Marks presents an invited talk at MIT Lincoln Laboratory Date: April 27, 2017
Where: Lincoln Laboratory, Massachusetts Institute of Technology
MERL Contact: Tim K. Marks
Research Area: Machine Learning
Brief: MERL researcher Tim K. Marks presented an invited talk as part of the MIT Lincoln Laboratory CORE Seminar Series on Biometrics. The talk was entitled "Robust Real-Time 2D Face Alignment and 3D Head Pose Estimation."
Abstract: Head pose estimation and facial landmark localization are key technologies, with widespread application areas including biometrics and human-computer interfaces. This talk describes two different robust real-time face-processing methods, each using a different modality of input image. The first part of the talk describes our system for 3D head pose estimation and facial landmark localization using a commodity depth sensor. The method is based on a novel 3D Triangular Surface Patch (TSP) descriptor, which is viewpoint-invariant as well as robust to noise and to variations in the data resolution. This descriptor, combined with fast nearest-neighbor lookup and a joint voting scheme, enables our system to handle arbitrary head pose and significant occlusions. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Both our 3D head pose and 2D face alignment methods outperform the previous results on standard datasets. If permitted, I plan to end the talk with a live demonstration.
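The nearest-neighbor-lookup-plus-voting idea in the first part of the abstract can be sketched as follows. This is a hedged, hypothetical illustration only: the actual TSP descriptor computation and the fast lookup structure are not reproduced — descriptors are plain vectors, lookup is brute-force, and the "robust aggregate" is a coordinate-wise median over pose votes.

```python
def nearest_pose(desc, train_db):
    """Return the pose attached to the training descriptor closest
    (squared Euclidean distance) to the query descriptor."""
    best = min(train_db,
               key=lambda entry: sum((a - b) ** 2
                                     for a, b in zip(desc, entry[0])))
    return best[1]

def vote_pose(query_descs, train_db):
    """Each local patch descriptor casts its nearest neighbor's stored
    pose as a vote; aggregate votes with a coordinate-wise median,
    which tolerates outlier votes from occluded or noisy patches."""
    votes = [nearest_pose(d, train_db) for d in query_descs]
    pose = []
    for dim in zip(*votes):
        s = sorted(dim)
        mid = len(s) // 2
        pose.append(s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2)
    return tuple(pose)
```

Using a median (or another robust aggregate) rather than a mean is what lets a few bad votes — e.g., from patches on an occluder — be outvoted by the majority.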
-
NEWS MERL researcher Tim Marks presents an invited talk at University of Utah Date: April 10, 2017
Where: University of Utah School of Computing
MERL Contact: Tim K. Marks
Research Area: Machine Learning
Brief: MERL researcher Tim K. Marks presented an invited talk at the University of Utah School of Computing, entitled "Action Detection from Video and Robust Real-Time 2D Face Alignment."
Abstract: The first part of the talk describes our multi-stream bi-directional recurrent neural network for action detection from video. In addition to a two-stream convolutional neural network (CNN) on full-frame appearance (images) and motion (optical flow), our system trains two additional streams on appearance and motion that have been cropped to a bounding box from a person tracker. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. Our method outperforms the previous state of the art on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we have made available to the community. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Our face alignment system outperforms the previous results on standard datasets. The talk will end with a live demo of our face alignment system.
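The temporal-modeling idea in the first part of this abstract — fuse per-frame scores from multiple streams, then let a bidirectional recurrence give each frame both past and future context — can be sketched with a toy stand-in. To be clear about what is assumed: this is not the paper's architecture; a single-unit Elman recurrence replaces the LSTM layer, the CNN streams are represented by ready-made per-frame score lists, fusion is plain averaging, and the weights are invented for illustration.

```python
import math

def fuse_streams(stream_scores):
    """Average per-frame action scores across streams (standing in for
    the full-frame and person-cropped appearance/motion streams)."""
    return [sum(frame) / len(frame) for frame in zip(*stream_scores)]

def rnn_pass(xs, w_in=1.0, w_rec=0.5, reverse=False):
    """Single-unit Elman recurrence h_t = tanh(w_in*x_t + w_rec*h_prev),
    a simplified stand-in for one direction of an LSTM layer."""
    seq = list(reversed(xs)) if reverse else list(xs)
    h, out = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        out.append(h)
    return list(reversed(out)) if reverse else out

def bidirectional_scores(frame_scores):
    """Combine forward and backward passes so each frame's output
    depends on both past and future frames."""
    fwd = rnn_pass(frame_scores)
    bwd = rnn_pass(frame_scores, reverse=True)
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]
```

The bidirectional combination is the key property for detection: an action's start frames get evidence from frames that come later, which a purely causal recurrence cannot provide.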