Facial expression recognition and histograms of oriented gradients: a comprehensive study.

Carcagnì P, Del Coco M, Leo M, Distante C - Springerplus (2015)

Bottom Line: This paper proposes a comprehensive study on the application of the histogram of oriented gradients (HOG) descriptor to the FER problem, showing how this powerful technique can be effectively exploited for this purpose. The first experimental phase aimed to prove the suitability of the HOG descriptor for characterizing facial expression traits; to this end, a successful comparison with the most commonly used FER frameworks was carried out. As a final phase, a test on continuous data streams was carried out on-line to validate the system in real-world operating conditions simulating a real-time human-machine interaction.

View Article: PubMed Central - PubMed

Affiliation: National Research Council of Italy, Institute of Applied Sciences and Intelligent Systems, Via della Libertà, 3, 73010 Arnesano (LE), Italy.

ABSTRACT
Automatic facial expression recognition (FER) is a topic of growing interest, mainly due to the rapid spread of assistive technology applications, such as human-robot interaction, where robust emotional awareness is key to accomplishing the assistive task. This paper proposes a comprehensive study on the application of the histogram of oriented gradients (HOG) descriptor to the FER problem, showing how this powerful technique can be effectively exploited for this purpose. In particular, it highlights that a proper setting of the HOG parameters can make this descriptor one of the most suitable for characterizing facial expression peculiarities. A large experimental session, which can be divided into three phases, was carried out using a consolidated algorithmic pipeline. The first phase aimed to prove the suitability of the HOG descriptor for characterizing facial expression traits; to this end, a successful comparison with the most commonly used FER frameworks was carried out. In the second phase, different publicly available facial datasets were used to test the system on images acquired under different conditions (e.g. image resolution, lighting conditions). As a final phase, a test on continuous data streams was carried out on-line to validate the system in real-world operating conditions simulating a real-time human-machine interaction.
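To make the descriptor concrete, below is a minimal sketch of a HOG computation over a face crop: per-cell histograms of gradient orientations, concatenated into one feature vector. This is illustrative only; the function name, parameter values (9 bins, 8x8 cells), and the simplified normalization are assumptions, not the authors' exact parameter set or pipeline.

```python
import numpy as np

def hog_descriptor(img, n_bins=9, cell=8):
    """Minimal HOG sketch: per-cell orientation histograms weighted by
    gradient magnitude, concatenated and globally L2-normalized.
    (Real HOG implementations also normalize over overlapping blocks.)"""
    gy, gx = np.gradient(img.astype(float))          # image gradients
    mag = np.hypot(gx, gy)                           # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0     # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # histogram of orientations in this cell, weighted by magnitude
            hist, _ = np.histogram(a, bins=n_bins, range=(0.0, 180.0), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

face = np.random.rand(64, 64)        # stand-in for a cropped grayscale face
d = hog_descriptor(face)
print(d.shape)                       # (576,): 8x8 cells of 9 bins each
```

The descriptor length grows with image resolution and shrinks with cell size, which is exactly the kind of parameter trade-off the paper's tuning phase explores.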

No MeSH data available.


Fig. 15: Examples of expression detection performed by the proposed system: the expression evolves over time (from top to bottom); once the decision-making rule is satisfied, the resulting prediction is printed out. From left to right, the neutral, sad, and surprised expressions are shown.

Mentions: From a qualitative point of view, the system evaluated in this step showed a good capacity to recognize all the emotions performed by the users, with few false positives thanks to the filtering performed by the temporal-window approach. Some examples of the system output are reported in Fig. 15.
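The temporal-window filtering mentioned above can be sketched as follows: buffer the last few per-frame predictions and emit a label only when one expression dominates the window. The class name, window size, and agreement threshold are illustrative assumptions, not the paper's exact decision rule.

```python
from collections import Counter, deque

class TemporalWindowFilter:
    """Sketch of a temporal-window decision rule: collect per-frame
    labels and emit a prediction only when one label accounts for at
    least `min_agreement` of the last `size` frames."""

    def __init__(self, size=10, min_agreement=0.7):
        self.window = deque(maxlen=size)       # rolling buffer of labels
        self.min_agreement = min_agreement

    def update(self, frame_label):
        self.window.append(frame_label)
        if len(self.window) < self.window.maxlen:
            return None                        # not enough evidence yet
        label, count = Counter(self.window).most_common(1)[0]
        if count / len(self.window) >= self.min_agreement:
            return label                       # decision rule satisfied
        return None                            # suppress noisy prediction

filt = TemporalWindowFilter(size=5, min_agreement=0.6)
stream = ["neutral"] * 3 + ["sad"] * 6         # simulated per-frame labels
outputs = [filt.update(label) for label in stream]
# -> [None, None, None, None, 'neutral', 'sad', 'sad', 'sad', 'sad']
```

Suppressing output until the window agrees is what keeps spurious single-frame misclassifications from surfacing as false positives in the live stream.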