Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns.

Pang Z, Wei C, Teng D, Chen D, Tan H - PLoS ONE (2015)

Bottom Line: In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been addressed in previous works. Through extensive experimentation, we show that the proposed method achieves a significant improvement in accuracy and robustness over state-of-the-art techniques, ranking second in terms of accuracy. In our implementation on a PC with a 2.5 GHz Xeon CPU, the eye-tracking process runs at 38 Hz.


Affiliation: School of Physics and Engineering, Sun Yat-Sen University, Guangzhou, China.

ABSTRACT
The localization of eye centers is a very useful cue for numerous applications such as face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods that rely on available light for accurate eye-center localization have been explored. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only a few of these developments deal with the challenges posed by non-frontal (profile) faces. In this paper, we first use the explicit shape regression method to obtain a rough location of the eye centers. Because this method extracts global information from the human face, it is robust against changes in the eye region, and we exploit this robustness by using it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, whose accuracy has been demonstrated in a previous study. By applying these features, we obtain a series of candidate eye-center locations. Among these candidates, the locations that minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation of the eye-center locations. Thus, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In our experiments, we use the BioID and FERET datasets to test the accuracy of our approach and its robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been addressed in previous works. Through extensive experimentation, we show that the proposed method achieves a significant improvement in accuracy and robustness over state-of-the-art techniques, ranking second in terms of accuracy. In our implementation on a PC with a 2.5 GHz Xeon CPU, the eye-tracking process runs at 38 Hz.
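To make the isophote-curvature component concrete, the snippet below is a minimal sketch of the isocentric-pattern voting idea the abstract refers to: each pixel of an eye patch votes for the center of its isophote, and the accumulator maximum is taken as the eye-center estimate. It assumes Gaussian derivatives computed with SciPy; the function name, the smoothing scale, and the omission of the curvature-sign filtering used in the original isocentric-pattern method are our simplifications, not the paper's implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def isophote_center_votes(eye_patch, sigma=1.5):
    """Accumulate isophote-center votes for a grayscale eye patch (sketch).

    Every pixel votes for the center of its isophote, displaced by
    D = -(Lx, Ly) * (Lx^2 + Ly^2) / (Ly^2*Lxx - 2*Lx*Lxy*Ly + Lx^2*Lyy),
    with the local curvedness as the vote weight. The accumulator maximum
    is returned as the eye-center estimate. The original method also
    filters votes by the sign of the isophote curvature (dark pupil on a
    brighter background); that refinement is omitted here for brevity.
    """
    img = eye_patch.astype(np.float64)
    # Gaussian-derivative images (axis 0 = rows/y, axis 1 = cols/x).
    Lx  = gaussian_filter(img, sigma, order=(0, 1))
    Ly  = gaussian_filter(img, sigma, order=(1, 0))
    Lxx = gaussian_filter(img, sigma, order=(0, 2))
    Lyy = gaussian_filter(img, sigma, order=(2, 0))
    Lxy = gaussian_filter(img, sigma, order=(1, 1))

    denom = Ly**2 * Lxx - 2.0 * Lx * Lxy * Ly + Lx**2 * Lyy
    denom[np.abs(denom) < 1e-8] = 1e-8            # avoid division by zero
    mag2 = Lx**2 + Ly**2
    dx = -Lx * mag2 / denom                       # displacement to isophote center
    dy = -Ly * mag2 / denom
    curvedness = np.sqrt(Lxx**2 + 2.0 * Lxy**2 + Lyy**2)  # vote weight

    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.round(xs + dx).astype(int)
    cy = np.round(ys + dy).astype(int)
    valid = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)

    acc = np.zeros_like(img)
    np.add.at(acc, (cy[valid], cx[valid]), curvedness[valid])
    acc = gaussian_filter(acc, sigma)             # smooth before taking the maximum
    return np.unravel_index(np.argmax(acc), acc.shape)   # (row, col) of strongest isocenter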


Fig 1 (pone.0139098.g001). Flowchart of the proposed eye-center localization method. First, face detection is applied, and a series of facial landmarks is initialized from the facial bounding box. Regression ferns are applied to achieve face alignment, and the eye-center location is estimated. A number of positions are selected as candidate eye-center locations. Using the face-alignment results, the most likely eye centers are then determined.

Mentions: In this section, we describe the proposed eye-center localization system. A flowchart of the algorithmic procedure is shown in Fig 1. A face detection process [31] is first applied to the test image. Using the bounding box obtained from face detection, we initialize a series of facial landmarks. Next, we apply regression ferns to obtain a face alignment, following the method described in [29]. Since this method uses global information from the face, it is more robust than unsupervised methods that analyze only the eye region, and we take its estimate as a constraint for selecting the most likely eye-center location in a later step. Meanwhile, we estimate the eye-center location using the method described in [18], which applies isocentric patterns to find the eye center. To improve this estimate, and unlike [18] and [28], we build a Gaussian pyramid from the test image and take the eye-center location estimated at each pyramid level as a candidate. These candidates reflect estimates at different scales, which gives our method scale invariance. Finally, combining the results of both methods, we adopt the AISC [30] to reconstruct the facial landmarks, and the candidate that minimizes the reconstruction error is taken as the eye center. This approach has two advantages: (1) based on AISC, the set of landmarks estimated by face alignment jointly determines the location, which improves robustness; and (2) the eye center is selected from the candidates, meaning that face alignment acts as a constraint while the accuracy of the unsupervised method is retained. The following subsections explain the implementation of each algorithmic step in detail.
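As a schematic of the candidate-generation and selection steps described above (not the paper's AISC procedure), the sketch below builds a Gaussian pyramid with OpenCV, collects one isophote-based candidate per level using the isophote_center_votes sketch shown earlier, and picks the candidate most consistent with the alignment-based estimate. The distance-to-alignment criterion is only a stand-in for the AISC reconstruction error over all facial landmarks [30]; the function names and parameters are illustrative assumptions.

import numpy as np
import cv2   # OpenCV, used only for the Gaussian pyramid

def eye_center_candidates(eye_patch, n_levels=3):
    """Generate eye-center candidates from a Gaussian pyramid of the eye patch.

    One candidate per pyramid level (isophote_center_votes is the sketch
    given earlier); coordinates are mapped back to level-0 resolution.
    """
    candidates = []
    level = eye_patch
    for i in range(n_levels):
        r, c = isophote_center_votes(level)
        candidates.append((c * 2**i, r * 2**i))   # (x, y) at original scale
        level = cv2.pyrDown(level)                # next, coarser pyramid level
    return candidates

def select_eye_center(candidates, aligned_center):
    """Pick the candidate most consistent with the face-alignment estimate.

    Stand-in for the paper's AISC reconstruction-error criterion: here
    consistency is simply the distance to the alignment-based eye center,
    which plays the role of the constraint described in the text.
    """
    aligned = np.asarray(aligned_center, dtype=np.float64)
    errors = [np.linalg.norm(np.asarray(c, dtype=np.float64) - aligned)
              for c in candidates]
    return candidates[int(np.argmin(errors))]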

