Automated analysis of craniofacial morphology using magnetic resonance images.

Chakravarty MM, Aleong R, Leonard G, Perron M, Pike GB, Richer L, Veillette S, Pausova Z, Paus T - PLoS ONE (2011)

Bottom Line: Using voxel-wise measures of expansion and contraction, we then examined the effects of sex and age on inter-individual variations in facial features. As with the voxel-wise analysis of the deformation fields, we examined the effects of sex and age on the PCA-derived spatial relationships between facial features. Both methods demonstrated significant sexual dimorphism in craniofacial structure in areas such as the chin, mandible, lips, and nose.

View Article: PubMed Central - PubMed

Affiliation: Rotman Research Institute, Baycrest, Toronto, Ontario, Canada. mchakravarty@rotman-baycrest.on.ca

ABSTRACT
Quantitative analysis of craniofacial morphology is of interest to scholars working in a wide variety of disciplines, such as anthropology, developmental biology, and medicine. T1-weighted (anatomical) magnetic resonance images (MRI) provide excellent contrast between soft tissues. Given its three-dimensional nature, MRI represents an ideal imaging modality for the analysis of craniofacial structure in living individuals. Here we describe how T1-weighted MR images, acquired to examine brain anatomy, can also be used to analyze facial features. Using a sample of typically developing adolescents from the Saguenay Youth Study (N = 597; 292 male, 305 female, ages: 12 to 18 years), we quantified inter-individual variations in craniofacial structure in two ways. First, we adapted existing nonlinear registration-based morphological techniques to generate iteratively a group-wise population average of craniofacial features. The nonlinear transformations were used to map the craniofacial structure of each individual to the population average. Using voxel-wise measures of expansion and contraction, we then examined the effects of sex and age on inter-individual variations in facial features. Second, we employed a landmark-based approach to quantify variations in face surfaces. This approach involves: (a) placing 56 landmarks (forehead, nose, lips, jaw-line, cheekbones, and eyes) on a surface representation of the MRI-based group average; (b) warping the landmarks to the individual faces using the inverse nonlinear transformation estimated for each person; and (c) using a principal components analysis (PCA) of the warped landmarks to identify facial features (i.e., clusters of landmarks) that vary in our sample in a correlated fashion. As with the voxel-wise analysis of the deformation fields, we examined the effects of sex and age on the PCA-derived spatial relationships between facial features. Both methods demonstrated significant sexual dimorphism in craniofacial structure in areas such as the chin, mandible, lips, and nose.
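The landmark steps (a)-(c) can be made concrete with a short sketch. The snippet below assumes the 56 landmarks have already been warped into each participant's space via the inverse nonlinear transforms and gathered into a NumPy array; the array names, the random placeholder data, and the use of scikit-learn's PCA are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

n_subjects, n_landmarks = 597, 56

# warped_landmarks: (n_subjects, n_landmarks, 3) x/y/z coordinates of each
# landmark after warping to the individual faces. Random data stand in here
# purely so the sketch runs end to end.
rng = np.random.default_rng(0)
warped_landmarks = rng.normal(size=(n_subjects, n_landmarks, 3))

# Flatten each subject's landmark configuration into one feature vector
# (56 landmarks x 3 coordinates = 168 features per subject).
X = warped_landmarks.reshape(n_subjects, -1)

# Each principal component is a pattern of correlated landmark displacements,
# i.e., a cluster of facial features that varies together across the sample.
pca = PCA(n_components=10)
scores = pca.fit_transform(X)                            # (597, 10) subject scores
loadings = pca.components_.reshape(-1, n_landmarks, 3)   # per-landmark loadings

print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
# The per-subject component scores can then be tested for effects of sex and
# age, in parallel with the voxel-wise analysis of the deformation fields.
```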



pone-0020241-g001: Population averages at each iteration in the hierarchical model-building process. For each step in the model-building process, axial (top row) and sagittal (bottom row) views are shown. From left to right: the 9-parameter linear, 12-parameter linear, and each of the 6 nonlinear models (from each step outlined in Table 1). Note the improved contrast and structural resolution at each step in the model-building process.

Mentions: To estimate differences in shape between faces within the population, a group-wise nonlinear average of the craniofacial features was estimated using methods similar to those used in the deformation-based analysis of brain anatomy in humans [32] and animals [33], [34], [35], [36]. All scans were first corrected for intensity inhomogeneity using the N3 algorithm [37]. To initialize the model-building process, a single T1-weighted MRI was randomly chosen from the sample to serve as the target for all other image volumes. All other MRI volumes were then rigidly rotated and translated (3 rotations and 3 translations) to match this initial target. The brain was then removed using the "Brain Extraction Tool" [38], leaving only craniofacial information in each image. The remaining data include the skull (including teeth) and soft tissue (skin, muscle, and subcutaneous fat), thereby allowing craniofacial features to be analyzed with respect to the composite of tissue types from which they are formed. Because the nonlinear transformations are estimated from local intensity information, the influence of structures such as the teeth should be mitigated. As a result of the brain extraction, the subsequent linear and nonlinear registration steps are driven only by intensity information in craniofacial structures. All possible pair-wise 9-parameter transformations (3 rotations, 3 translations, and 3 scales; 596 transformations for each of the 597 participants) were estimated, and an average linear transformation was calculated for each image, effectively scaling each individual scan to the average head and face size of the population. After applying the average transformations, the scans were averaged, and the original scans were then registered to this model using a 12-parameter transformation (3 rotations, 3 translations, 3 scales, and 3 shears); a new population-based average was estimated at this point. This average serves as the population model accounting for all linear differences in head size. A multi-generation, multi-resolution fitting strategy was then initiated, in which each head was nonlinearly registered to the 12-parameter population atlas and a further population-based average was estimated. The group-wise atlas was generated in this iterative fashion: all heads were nonlinearly registered to the atlas from the previous iteration, using nonlinear transformations of increasing resolution at each generation. The resulting transformations map the craniofacial structure of each individual to the nonlinear average of the entire group and can be analyzed explicitly to determine local variations in shape. Linear [39] and nonlinear [40] transformations were estimated using the mni_autoreg package, available as part of the MINC toolbox (http://packages.bic.mni.mcgill.ca/). Nonlinear transformations were estimated using the previously optimized version of the ANIMAL algorithm [41]. Table 2 contains the parameters used at each stage of the nonlinear model-building process. Figure 1 shows the results of the population averaging at each iteration of the model-building process.
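The sketch below illustrates the control flow of this hierarchical model building, assuming the MINC command-line tools named in the text (minctracc, mincresample, mincaverage) are installed. It is a simplification, not a reproduction of the authors' pipeline: the directory name, helper functions, and file naming are ours; the 9-parameter stage registers to a single target rather than averaging all 596 pairwise transforms per participant; and the per-generation resolution settings of the optimized ANIMAL fits (Table 2) are omitted.

```python
import subprocess
from pathlib import Path


def run(cmd):
    """Run one MINC command-line tool and fail loudly if it returns non-zero."""
    subprocess.run(cmd, check=True)


def build_average(scans, target, prefix, reg_args):
    """Register every scan to `target`, resample into its space, and average."""
    resampled = []
    for i, scan in enumerate(scans):
        xfm = f"{prefix}_{i:03d}.xfm"
        res = f"{prefix}_{i:03d}_res.mnc"
        # Estimate a linear or nonlinear transformation, depending on reg_args.
        run(["minctracc", scan, target, xfm] + reg_args)
        # Resample the scan through that transformation into the model space.
        run(["mincresample", scan, res, "-like", target, "-transformation", xfm])
        resampled.append(res)
    model = f"{prefix}_model.mnc"
    # The voxel-wise mean of the resampled scans becomes the next model.
    run(["mincaverage"] + resampled + [model])
    return model


# Brain-extracted, N3-corrected, rigidly aligned heads (directory name is ours).
scans = sorted(str(p) for p in Path("brainless_heads").glob("*.mnc"))

# Stage 1: 9-parameter model (rotations, translations, scales). Simplified here
# to a single target; the paper instead averages all pairwise 9-parameter
# transforms for each participant before averaging the scans.
model = build_average(scans, scans[0], "lin9", ["-lsq9"])

# Stage 2: 12-parameter model (adds shears), built against the 9-parameter average.
model = build_average(scans, model, "lin12", ["-lsq12"])

# Stage 3: six generations of nonlinear refinement; each generation registers
# every head to the previous generation's average, at increasing resolution
# (the ANIMAL parameters for each generation, per Table 2, are omitted here).
for gen in range(6):
    model = build_average(scans, model, f"nl{gen}", ["-nonlinear"])
```

The transformations written at the final generation are the ones that map each individual's craniofacial structure onto the group average and feed the voxel-wise and landmark analyses described above.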

