Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors.

Ge S, Fan G - Sensors (Basel) (2015)

Bottom Line: We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms.


Affiliation: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA. song.ge@okstate.edu.

ABSTRACT
We propose a generative framework for 3D human pose estimation that operates on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem and propose three new approaches to address its major technical challenges. First, we integrate two registration techniques with complementary strengths to cope with the non-rigid and articulated deformations of the human body under a variety of poses. This combination allows us to handle point sets with complex body motion and large pose variation without the initial conditions required by most existing approaches. Second, we introduce an efficient pose tracking strategy for sequential depth data, where the major challenge is incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method that initializes a new template for the current frame from the previous frame, which effectively reduces ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration when needed. Experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework compared with state-of-the-art algorithms.
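The visible point extraction step mentioned above can be approximated with a simple z-buffer: project the previous frame's registered template toward the sensor and keep only the nearest point per image cell. This is a minimal sketch under that assumption; the function name, grid `resolution`, and orthographic projection are illustrative choices, not the paper's actual interface.

```python
import numpy as np

def extract_visible_points(points, resolution=64):
    """Keep only points visible from the sensor: project points onto an
    x-y grid and retain the point with the smallest depth (z) per cell.
    A z-buffer approximation of visible point extraction."""
    mins = points[:, :2].min(axis=0)
    maxs = points[:, :2].max(axis=0)
    cell = (maxs - mins) / resolution
    cell[cell == 0] = 1.0                      # guard against degenerate extent
    idx = np.floor((points[:, :2] - mins) / cell).astype(int)
    idx = np.clip(idx, 0, resolution - 1)
    flat = idx[:, 0] * resolution + idx[:, 1]  # one key per grid cell

    nearest = {}
    for i, key in enumerate(flat):
        # Smaller z = closer to the sensor (depth-camera convention).
        if key not in nearest or points[i, 2] < points[nearest[key], 2]:
            nearest[key] = i
    return points[sorted(nearest.values())]
```

Points occluded along the viewing direction fall into the same cell as a nearer point and are discarded, leaving a front-surface template for the next registration step.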


Figure 7. Illustration of the proposed segment-aware AICP (SAICP) registration algorithm. (a) Two examples of constructing a rigid body part: selecting a single segment (red area) or several connected segments (blue area), neither of which is supported by the original AICP algorithm; (b) An example of transformation estimation for the left arm. (1) The template (green) and target (red) models; (2) The result of upper-arm deformation; (3) The result of lower-arm deformation; (4) The result of whole-arm deformation.

The original AICP algorithm in [16] adopts a divide-and-conquer strategy to iteratively estimate an articulated structure by assuming that it is partially rigid. In each iteration, the articulated structure is split into two parts at a joint, which is selected randomly or cyclically; the classic rigid ICP is then performed locally on one of these two parts. AICP works effectively when the template and target have similar segmental configurations (i.e., similar poses), which may not hold in human pose estimation. In our case, given reliable correspondence estimation by GLTP, we follow a more flexible and efficient scheme that constructs a partially rigid body part by selecting a single segment or several connected segments. We develop a new segment-aware AICP (SAICP) algorithm to find the rigid transformations for all segments by optimizing Equation (11) in a way that reflects segment-level articulated motion. The main idea is to take advantage of GLTP's output by starting from the root (the torso) and head, which are relatively stable, and then proceeding along the tree-structured skeleton according to the connectivity between segments, as shown in Figure 7a. This allows us to treat each limb in a particular order, upper, lower and then whole, as shown in Figure 7b, and makes it efficient to update the rigid transformations of the four limbs simultaneously. It is worth noting that the correspondences at each segment are updated in every iteration, with the segment label information of X̂ and Ẑ also used in the minimum-distance search.
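The segment-wise sweep described above can be sketched as follows: fit a least-squares rigid transform (Kabsch/Procrustes) per group of connected segments, visiting groups in skeleton order (torso and head first, then upper, lower, and whole limb). The `labels`, `groups`, and index-aligned correspondences are illustrative assumptions standing in for GLTP's output, not the paper's actual interface.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src -> dst,
    given one-to-one point correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # reject reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def saicp_pass(template, target, labels, groups):
    """One SAICP-style sweep: for each group of connected segments
    (e.g. [torso, head] first, then upper arm, lower arm, whole arm),
    rigidly align the template points carrying those segment labels to
    their corresponding target points. Correspondences are assumed
    index-aligned here for simplicity."""
    out = template.copy()
    for group in groups:              # visiting order follows the skeleton tree
        mask = np.isin(labels, group)
        R, t = rigid_fit(out[mask], target[mask])
        out[mask] = out[mask] @ R.T + t
    return out
```

In the real algorithm the correspondences (and the minimum-distance search restricted by segment labels) would be refreshed between sweeps; here a single pass with fixed correspondences illustrates the group-wise rigid updates.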


Articulated Non-Rigid Point Set Registration for Human Pose Estimation from 3D Sensors.

Ge S, Fan G - Sensors (Basel) (2015)

The illustration of the proposed segment-aware AICP (SAICP)-based registration algorithm. (a) Two examples to construct the rigid body part: selecting a single segment (red area) or several connected segments (blue area), which cannot be supported by the original AICP algorithm; (b) One example of transformation estimation of the left arm. (1) The template (green) and target (red) models; (2) The result of upper-arm deformation; (3) The result of lower-arm deformation; (4) The result of whole-arm deformation.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC4541828&req=5

f7-sensors-15-15218: The illustration of the proposed segment-aware AICP (SAICP)-based registration algorithm. (a) Two examples to construct the rigid body part: selecting a single segment (red area) or several connected segments (blue area), which cannot be supported by the original AICP algorithm; (b) One example of transformation estimation of the left arm. (1) The template (green) and target (red) models; (2) The result of upper-arm deformation; (3) The result of lower-arm deformation; (4) The result of whole-arm deformation.
Mentions: The original AICP algorithm in [16] adopts a divide-and-conquer strategy to iteratively estimate an articulated structure by assuming that it is partially rigid. In each iteration, the articulated structure is split into two parts by a joint, which is selected randomly or cyclically; then, the classic rigid ICP is performed locally on one of these two parts. AICP works effectively when the template and target have similar segmental configurations (i.e., similar poses), which may not be true in human pose estimation. In our case, given reliable correspondence estimation by GLTP, we follow a more flexible and efficient scheme to construct a partial rigid body part by selecting single or several connected segments. We develop a new segment-aware AICP (SAICP) algorithm to find the rigid transformations for all segments by optimizing Equation (11) in a way that reflects segment-level articulated motion. The main idea is to take advantage of GLTP's output by starting from the root (the torso) and head, which are relatively stable, and then following along the tree-structured skeleton according to the connectivity between segments, as shown in Figure 7a. This allows us to treat the limbs in a particular order, upper, lower and whole, as shown in Figure 7b, and it is efficient to update the rigid transformations of four limbs simultaneously. It is worth mentioning that the correspondences at each segment will be updated during each iteration when the segment label information of X̂ and Ẑ is also used for the minimum distance search.

Bottom Line: We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration.Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed.The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms.

View Article: PubMed Central - PubMed

Affiliation: School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA. song.ge@okstate.edu.

ABSTRACT
We propose a generative framework for 3D human pose estimation that is able to operate on both individual point sets and sequential depth data. We formulate human pose estimation as a point set registration problem, where we propose three new approaches to address several major technical challenges in this research. First, we integrate two registration techniques that have a complementary nature to cope with non-rigid and articulated deformations of the human body under a variety of poses. This unique combination allows us to handle point sets of complex body motion and large pose variation without any initial conditions, as required by most existing approaches. Second, we introduce an efficient pose tracking strategy to deal with sequential depth data, where the major challenge is the incomplete data due to self-occlusions and view changes. We introduce a visible point extraction method to initialize a new template for the current frame from the previous frame, which effectively reduces the ambiguity and uncertainty during registration. Third, to support robust and stable pose tracking, we develop a segment volume validation technique to detect tracking failures and to re-initialize pose registration if needed. The experimental results on both benchmark 3D laser scan and depth datasets demonstrate the effectiveness of the proposed framework when compared with state-of-the-art algorithms.

No MeSH data available.


Related in: MedlinePlus