Multi-scale Gaussian representation and outline-learning based cell image segmentation.

Farhan M, Ruusuvuori P, Emmenlauer M, Rämö P, Dehio C, Yli-Harja O - BMC Bioinformatics (2013)

Bottom Line: Image segmentation is typically at the forefront of such analysis, as the performance of the subsequent steps, for example, cell classification and cell tracking, often relies on the results of segmentation. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.


ABSTRACT

Background: High-throughput genome-wide screening to study gene-specific functions, e.g., for drug discovery, demands fast automated image analysis methods to unravel the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of subsequent steps, such as cell classification and cell tracking, often relies on the results of segmentation.

Methods: We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from the image background using a novel approach combining image enhancement with the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection, which classifies image pixels as outline/non-outline to yield cytoplasm outlines. The detected outlines are refined to separate cells from each other in a post-processing step, where the nuclei segmentation is used as contextual information.
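The scale-space coefficient-of-variation idea can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the function name `scale_space_cv` and the choice of sigma values are assumptions. The image is smoothed at several Gaussian scales, and the per-pixel ratio of standard deviation to mean across the stack highlights regions whose appearance changes with scale (cytoplasm boundaries) over flat background.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_cv(image, sigmas=(1, 2, 4, 8)):
    """Per-pixel coefficient of variation across a Gaussian scale-space stack.

    Flat background regions barely change under smoothing, so their CV is
    near zero; structured regions (edges, texture) get a high CV.
    """
    # Build the scale-space stack: one smoothed copy per sigma.
    stack = np.stack([gaussian_filter(image.astype(float), s) for s in sigmas])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    eps = 1e-12  # avoid division by zero on all-zero background
    return std / (mean + eps)
```

A CV map like this can then be thresholded to separate foreground (cytoplasm) from background before outline classification.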

Results and conclusions: We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics and cells of varying size, shape, texture and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them, with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
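The sparsity of the outline models (only 7 and 5 selected features) comes from the embedded feature selection of L1-regularized logistic regression: the penalty drives most coefficients to exactly zero, so the surviving features are the selected ones. A generic sketch of this mechanism, on synthetic data (the data, feature counts, and regularization strength `C` here are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 500, 50                      # many generic features, few informative
X = rng.normal(size=(n, p))
# The outline/non-outline label depends on only 3 of the 50 features.
y = (X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2] > 0).astype(int)

# L1 penalty zeroes out uninformative coefficients (embedded feature selection).
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])  # indices of the surviving features
```

In a pixel-classification setting, `X` would hold the large generic feature set computed per pixel and `y` the outline/non-outline labels; the nonzero coefficients define the final sparse model.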



Figure 6: Cell cytoplasm segmentation for Test Case II. (a)-(b) Two merged cytoplasm (red)/nuclei (blue) channel images, (c)-(d) benchmark segmentation, (e)-(f) nuclei segmentation from [6], and (g)-(h) results of the proposed segmentation. The size of the images is 450×450 pixels.

Mentions: The same images of Test Case II were used for performance evaluation of the joint cell nuclei and cytoplasm segmentation presented in [16]. Comparing the reported TP, FP, and FN values with those we obtained for cytoplasm segmentation, our results are similar or slightly improved, although it is difficult to say whether the difference is significant. Moreover, the FM value of our method for nuclei detection is 0.95, compared with the FM value of 0.80 reported in [16]. This suggests that our method outperforms a recently proposed method which was also reported to be computationally quite expensive. Figure 6 shows the results of the proposed method for two images from Test Case II.
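The FM (F-measure) values compared above follow directly from the TP, FP, and FN counts mentioned in the same paragraph: FM is the harmonic mean of precision and recall. A small self-contained helper (the function name is ours, but the formula is the standard definition):

```python
def f_measure(tp, fp, fn):
    """F-measure (harmonic mean of precision and recall) from raw counts."""
    precision = tp / (tp + fp)  # fraction of detections that are correct
    recall = tp / (tp + fn)     # fraction of ground-truth objects detected
    return 2 * precision * recall / (precision + recall)
```

For example, 95 true positives with 5 false positives and 5 false negatives gives precision = recall = 0.95 and hence FM = 0.95, the value our method achieves for nuclei detection.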

