The filament sensor for near real-time detection of cytoskeletal fiber structures.

Eltzner B, Wollnik C, Gottschlich C, Huckemann S, Rehfeldt F - PLoS ONE (2015)

Bottom Line: Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.

View Article: PubMed Central - PubMed

Affiliation: Institute for Mathematical Stochastics, Georg-August-University, 37077 Göttingen, Germany.

ABSTRACT
A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.

No MeSH data available.


Performance comparison in the presence of blur, showing cell VB2. Green pixels are false positives detected by the method, yellow pixels are correctly identified, and red pixels are missed, as in Fig 9. All methods produce some false positives, but the eLoG method stands out by detecting almost all cell pixels as line pixels. CID produces a cobweb structure with an amount of oversegmentation similar to that of the FS.

pone.0126346.g013: Performance comparison in the presence of blur (cell VB2).

Mentions: In the presence of blur, e.g. if the image is slightly off-focus, cf. Fig 13 showing cell VB2, the FS identifies 55.74% of labeled filament pixels with an oversegmentation of 75.83% of labeled filament pixels. The CID finds 48.65% of labeled filament pixels with an oversegmentation rate of 58.02%; it cannot, however, give orientation information. Notably, the oversegmented pixels found by the FS are compatible with the ground truth orientation labeling. The eLoG method oversegments dramatically, rendering its result useless for further analysis. Cell VB2 is one of the outliers for both the eLoG and Hough methods, cf. Figs 14 and 15.
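The percentages above compare a binary detection mask to the expert labeling pixel by pixel: correctly identified pixels (yellow in Fig 13) count toward the detection rate, while false positives (green) make up the oversegmentation, which is reported relative to the number of labeled filament pixels. A minimal sketch of this bookkeeping, assuming boolean masks of equal shape (the helper name is hypothetical, not the authors' code):

```python
import numpy as np

def filament_pixel_scores(detected: np.ndarray, labeled: np.ndarray):
    """Pixel-level comparison of a detected filament mask against a
    hand-labeled ground truth mask (both boolean arrays, same shape)."""
    labeled_count = labeled.sum()
    hits = np.logical_and(detected, labeled).sum()        # yellow: correctly identified
    false_pos = np.logical_and(detected, ~labeled).sum()  # green: oversegmentation
    detection_rate = hits / labeled_count
    # Oversegmentation relative to labeled filament pixels, as in the text.
    oversegmentation = false_pos / labeled_count
    return detection_rate, oversegmentation

# Toy 2x3 example: two labeled pixels, both found, plus two false positives.
detected = np.array([[1, 1, 0], [0, 1, 1]], dtype=bool)
labeled = np.array([[1, 0, 0], [0, 1, 0]], dtype=bool)
rate, overseg = filament_pixel_scores(detected, labeled)
# rate = 1.0 (2 of 2 labeled pixels found), overseg = 1.0 (2 false positives / 2 labeled)
```

Note that with this convention the oversegmentation rate can exceed 100%, since false positives are normalized by the labeled pixel count rather than by the detected pixel count.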
