Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors.

Ge S, Fan G - Sensors (Basel) (2015)

Bottom Line: We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms.

View Article: PubMed Central - PubMed

Affiliation: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA. song.ge@okstate.edu.

ABSTRACT
We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms.

No MeSH data available.



f6-sensors-15-15218: Some examples of segment volume validation: (a) a passing case; (b) Case I failure (invalid M1); (c) Case II failure (invalid M1 and M2) in a couple of limbs and the torso; (d) Case III failure (invalid M1 and M2 in most segments).

Mentions: As shown in [23,49], GLTP works very well in most depth sequences we tested, but there are still three challenging cases in which GLTP may fail due to invalid correspondence estimation, as shown in Figure 6: (1) Case I: some segments become invisible in the current frame due to a view change (e.g., the subject is turning from the frontal view to the side view, Figure 6b); (2) Case II: some segments suddenly reappear after being absent for some frames due to a view change (e.g., the subject is turning back to the frontal view from the side view, Figure 6c); (3) Case III: there are significant self-occlusions between two adjacent frames due to large pose variation and fast motion, which causes a large number of missing points in the target point set (e.g., the subject is making a quick high kick, Figure 6d). We will discuss how to detect these three cases using the two proposed metrics and how to remedy each accordingly. The thresholds of M1 and M2 are given in the experiments.
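The failure taxonomy above can be sketched as a simple per-frame classifier. This is only an illustrative sketch, not the paper's implementation: the inputs `m1_valid` and `m2_valid` (per-segment boolean validity flags, assumed to come from thresholding the two volume metrics M1 and M2) and the `majority` cutoff for Case III are hypothetical names and choices, since the paper defers the actual thresholds to its experiments.

```python
from enum import Enum

class Outcome(Enum):
    PASS = "pass"
    CASE_I = "Case I: M1 invalid (segments became invisible)"
    CASE_II = "Case II: M1 and M2 invalid in a few segments"
    CASE_III = "Case III: M1 and M2 invalid in most segments"

def validate_segments(m1_valid, m2_valid, majority=0.5):
    """Classify a frame from per-segment validity flags of two metrics.

    m1_valid, m2_valid: lists of booleans, one entry per body segment
        (True = the segment's metric passed its threshold).
    majority: fraction of jointly invalid segments that escalates the
        failure to Case III (hypothetical cutoff for illustration).
    """
    n = len(m1_valid)
    # Segments where both metrics fail (Case II/III signature).
    both_invalid = sum(1 for a, b in zip(m1_valid, m2_valid)
                       if not a and not b)
    # Segments where only M1 fails (Case I signature).
    m1_only_invalid = sum(1 for a, b in zip(m1_valid, m2_valid)
                          if not a and b)

    if both_invalid == 0 and m1_only_invalid == 0:
        return Outcome.PASS          # all segments valid: keep tracking
    if both_invalid / n >= majority:
        return Outcome.CASE_III      # most segments broken: re-initialize
    if both_invalid > 0:
        return Outcome.CASE_II       # a few segments broken
    return Outcome.CASE_I            # only M1 violations
```

A tracker would run this check every frame and trigger the paper's re-initialization step whenever the outcome is not `PASS`.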

