Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors.

Ge S, Fan G - Sensors (Basel) (2015)

Bottom Line: We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms.


Affiliation: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA. song.ge@okstate.edu.

ABSTRACT
We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms.



f4-sensors-15-15218 (Figure 4): (a) Relative position between the camera and the 3D template; (b) The inverted points lie in the convex hull; (c) The extracted visible points; (d) The invisible points.

Mentions: Visible point extraction is important to support depth map-based pose estimation, especially in the case of sequential depth data. This step requires the relative position between the full-body template model and the camera. In this work, we use the hidden point removal (HPR) operator [43] to detect the visible points of a given template model. Given a point set A = {ai} and a viewpoint C (the camera position), the HPR operator determines in two steps whether each ai ∈ A is visible from C. In the first step, we associate a coordinate system with A and set C as its origin. We then find the inverted point âi of each ai using spherical flipping [44]:

(3)  âi = ai + 2(R − ‖ai‖) ai / ‖ai‖

where R is the radius of a sphere constrained to include all ai. We denote the set of inverted points by Â = {âi}. In the second step, we construct the convex hull of S = Â ∪ {C}. A point ai is then marked as visible from C if its inverted point âi lies on the convex hull of S. An example of visible point extraction is shown in Figure 4. After this process, we obtain the visible point set of the full-body template model, which is ready for registration.
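To make the two-step HPR operator concrete, here is a minimal Python sketch (assuming NumPy and SciPy; the function name hpr_visible_points and the R = 1.1 · max‖ai‖ radius heuristic are illustrative choices, not taken from the paper). It applies spherical flipping (Equation (3)) in camera-centered coordinates, builds the convex hull of S = Â ∪ {C}, and keeps the points whose inverted images lie on that hull:

import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible_points(points, camera, radius=None):
    # Step 1: move to a coordinate system with the viewpoint C at the origin.
    p = points - camera
    norms = np.linalg.norm(p, axis=1, keepdims=True)
    # R must be large enough that the sphere contains all points (Eq. (3));
    # 1.1 * max norm is an assumed heuristic, not a value from the paper.
    if radius is None:
        radius = 1.1 * norms.max()
    # Spherical flipping: a_hat = a + 2 * (R - ||a||) * a / ||a||.
    flipped = p + 2.0 * (radius - norms) * (p / norms)
    # Step 2: convex hull of S = A_hat union {C}; C is the origin here.
    hull = ConvexHull(np.vstack([flipped, np.zeros((1, points.shape[1]))]))
    # a_i is visible iff its inverted point lies on the hull of S
    # (discard the hull vertex corresponding to the camera itself).
    return np.array(sorted(v for v in hull.vertices if v < len(points)))

# Example: points sampled on a unit sphere, camera on the +z axis;
# roughly the hemisphere facing the camera should be returned as visible.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
vis = hpr_visible_points(pts, camera=np.array([0.0, 0.0, 3.0]))

One practical note: the choice of R affects the result, since too small a radius can miss visible points while a very large one tends to over-mark points as visible, so in a tracking setting it may need tuning to the scale of the template.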

