Multi-scale Gaussian representation and outline-learning based cell image segmentation.

Farhan M, Ruusuvuori P, Emmenlauer M, Rämö P, Dehio C, Yli-Harja O - BMC Bioinformatics (2013)


ABSTRACT

Background: High-throughput genome-wide screening to study gene-specific functions, e.g., for drug discovery, demands fast, automated image analysis methods to help unravel the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of subsequent steps, for example cell classification and cell tracking, often relies on the results of segmentation.

Methods: We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach combining image enhancement with the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method, built on regularized logistic regression with embedded feature selection, classifies image pixels as outline/non-outline to yield cytoplasm outlines. The detected outlines are refined to separate cells from each other in a post-processing step that uses the nuclei segmentation as contextual information.
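
The paper does not reproduce its implementation here, but the core idea of the foreground step, the coefficient of variation across a Gaussian scale-space stack, can be sketched as follows. The function name, the scale values in `sigmas`, and the thresholded usage at the end are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cov_scale_space(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Coefficient of variation of a Gaussian scale-space stack.

    The image is smoothed at several scales; the per-pixel standard
    deviation across scales, normalized by the per-pixel mean, is
    large where intensity structure changes with scale (textured
    cytoplasm) and small on flat background.
    """
    stack = np.stack([gaussian_filter(image.astype(float), sigma=s)
                      for s in sigmas])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return std / (mean + 1e-12)  # guard against division by zero

# Hypothetical usage: threshold the CoV map to obtain a foreground mask
# (the enhancement step and the threshold value are not specified here).
# cov = cov_scale_space(enhanced_image)
# foreground = cov > cov_threshold
```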

Results and conclusions: We evaluate the proposed segmentation methodology on two challenging test cases presenting images with completely different characteristics: cells of varying size, shape, texture, and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, namely 7 and 5 features for the two cases, respectively. Quantitative comparison against state-of-the-art methods shows that our methodology outperforms them, with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.



Figure 4: Visual representation of features used by classifiers. (a) A pre-processed image, (b) VAR_{3×3}, (c) MIN_{7×7}, (d) f1/4th0_{5×5}, (e) f1/4th3π/4_{5×5}, (f) ASM_{5×5}, (g) IMOC2_{7×7}, (h) IMOC2_{9×9}, (i) ENT_{5×5}, (j) DOE_{7×7}, (k) ASM_{9×9}, and (l) the outlines obtained by thresholding the classifier output. The size of the images is 700×430 pixels.

Mentions: where f_1 = ENT_{5×5} stands for entropy, f_2 = DOE_{7×7} for differenceOfEntropy, f_3 = IMOC2_{7×7} and f_4 = IMOC2_{9×9} for informationMeasureOfCorrelation2, and f_5 = ASM_{9×9} for angularSecondMoment; see [29] for details. Again, the subscript x×y stands for the respective kernel size. Then, for each test image, a feature matrix of size 1447680×7 for Test Case I and 160000×5 for Test Case II was computed and fed to the above models to obtain class probabilities using Equation 4. The probabilities were thresholded at 0.5 to label pixels as outline/non-outline. Finally, a post-processing step was performed to complete the segmentation. Figure 4 presents a visual representation of the features used by the classifiers given in (8) and (9).
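
As a concrete sketch of this classification step: a logistic model of the form the text refers to as Equation 4 maps each pixel's feature row to an outline probability, which is then thresholded at 0.5. The function and variable names below are illustrative, not taken from the paper's code:

```python
import numpy as np

def outline_probabilities(X, w, b):
    """Per-pixel outline probability from a sparse logistic-regression
    model, p = 1 / (1 + exp(-(X @ w + b))), i.e. the standard logistic
    form the text calls Equation 4.

    X: (n_pixels, n_features) feature matrix, e.g. 1447680x7 for
       Test Case I or 160000x5 for Test Case II.
    w: (n_features,) learned weights; b: scalar intercept.
    """
    z = X @ w + b
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical usage with a fitted model (w, b):
# probs = outline_probabilities(X, w, b)
# outline = probs > 0.5  # threshold at 0.5, as described in the text
```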

