Multi-scale Gaussian representation and outline-learning based cell image segmentation.

Farhan M, Ruusuvuori P, Emmenlauer M, Rämö P, Dehio C, Yli-Harja O - BMC Bioinformatics (2013)



ABSTRACT

Background: High-throughput genome-wide screening to study gene-specific functions, e.g., for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis, as the performance of subsequent steps, such as cell classification and cell tracking, often relies on the results of segmentation.

Methods: We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning-based classification method, built on regularized logistic regression with embedded feature selection, classifies image pixels as outline/non-outline to produce cytoplasm outlines. The detected outlines are refined in a post-processing step that uses the nuclei segmentation as contextual information to separate touching cells.
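A minimal sketch of the multi-scale Gaussian / coefficient-of-variation idea described above: smooth the image at several scales and measure, per pixel, how much the response varies across the scale stack. The scale set, the SciPy routines and the thresholding step are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: coefficient of variation over a Gaussian scale-space stack.
# Assumed scales and libraries are for illustration only.
import numpy as np
from scipy.ndimage import gaussian_filter

def coefficient_of_variation_stack(image, sigmas=(1, 2, 4, 8, 16)):
    """Smooth the image at several Gaussian scales and return the per-pixel
    coefficient of variation (std / mean) across the scale-space stack."""
    image = image.astype(np.float64)
    stack = np.stack([gaussian_filter(image, sigma=s) for s in sigmas], axis=0)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    eps = np.finfo(np.float64).eps   # avoid division by zero in flat background
    return std / (mean + eps)

# Usage sketch: threshold the CV map to separate cytoplasm from background.
# cv_map = coefficient_of_variation_stack(enhanced_image)
# foreground = cv_map > threshold    # threshold chosen per dataset
```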

Results and conclusions: We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics and cells of varying size, shape, texture and degree of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them, with a 4-9% increase in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
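As a hedged illustration of how such sparse models arise, the sketch below trains an L1-regularized (lasso-type) logistic regression on per-pixel features, so that most coefficients shrink to zero and only a handful of features are retained. The scikit-learn API, the regularization strength C and the feature matrix are assumptions for illustration, not the paper's feature set or training protocol.

```python
# Sketch: sparse outline/non-outline pixel classification via L1-regularized
# logistic regression (embedded feature selection). Parameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_outline_classifier(X, y, C=0.1):
    """X: (n_pixels, n_features) feature matrix; y: 1 for outline pixels, 0 otherwise.
    The L1 penalty drives most coefficients to zero, yielding a sparse model."""
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    clf.fit(X, y)
    selected = np.flatnonzero(clf.coef_.ravel())  # indices of retained features
    return clf, selected

# Usage sketch:
# clf, selected = train_outline_classifier(X_train, y_train)
# outline_prob = clf.predict_proba(X_pixels)[:, 1].reshape(image_shape)
```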



Figure 5: Cell cytoplasm segmentation for Test Case I. (a) A merged cytoplasm (red)/nuclei (blue) channel image, (b) benchmark segmentation from biologists, (c) nuclei segmentation from [6] and (d) the result of the proposed segmentation. The size of the image is 1040×1392 pixels.

Mentions: In light of the discussion in the previous paragraph, forced splitting to obtain one cytoplasm per detected nucleus did not seem beneficial. However, the FP for our cytoplasm segmentation was still found to be twice that for nuclei segmentation. The reason is that objects that were not split into their constituent cells could no longer correspond to even a single object in the benchmark image because of the FMth constraint. Moreover, a consequence of avoiding forced splitting was an increased FN, as some clumped cells were not detected. It is worth noting that, since the FP for our nuclei segmentation was low, forced splitting might still have resulted in an FP value similar to the one obtained without it, but with a much lower FN. However, the main reason for not using forced splitting was that we wanted to retain multinuclear cell phenotypes. The overall segmentation from the proposed method confirms that it outperforms the method from CP, with a 9% increase in FM value. Another measure we obtained is the mean FM over all correctly detected cytoplasms, which was 0.85 for the proposed method against 0.81 for the CP implementation. This also shows how well the cytoplasms correspond between the benchmark images and our segmented images. Figure 5 presents the segmentation results from the proposed method for qualitative evaluation.
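The FP/FN/FM discussion above implies an object-level matching rule: a segmented object counts as a true positive only if its F-measure against some benchmark object reaches the threshold FMth. The sketch below shows one plausible way to compute such counts; the best-match rule and the example FMth value are assumptions for illustration, not the paper's exact evaluation protocol.

```python
# Sketch: object-level TP/FP/FN counting under an F-measure threshold (FMth).
# The matching rule and default threshold are illustrative assumptions.
import numpy as np

def fmeasure(a_mask, b_mask):
    """Pixel-wise F-measure between two binary object masks."""
    tp = np.logical_and(a_mask, b_mask).sum()
    if tp == 0:
        return 0.0
    precision = tp / a_mask.sum()
    recall = tp / b_mask.sum()
    return 2 * precision * recall / (precision + recall)

def object_level_counts(benchmark_objects, segmented_objects, fm_th=0.7):
    """benchmark_objects, segmented_objects: lists of binary masks.
    Returns (TP, FP, FN) under a best-match rule with threshold fm_th."""
    matched = set()
    tp = 0
    for bench in benchmark_objects:
        scores = [fmeasure(seg, bench) for seg in segmented_objects]
        best = int(np.argmax(scores)) if scores else -1
        if best >= 0 and scores[best] >= fm_th and best not in matched:
            matched.add(best)
            tp += 1
    fn = len(benchmark_objects) - tp             # benchmark cells left unmatched
    fp = len(segmented_objects) - len(matched)   # segmented objects matching nothing
    return tp, fp, fn
```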

