The filament sensor for near real-time detection of cytoskeletal fiber structures.

Eltzner B, Wollnik C, Gottschlich C, Huckemann S, Rehfeldt F - PLoS ONE (2015)

Bottom Line: Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.

View Article: PubMed Central - PubMed

Affiliation: Institute for Mathematical Stochastics, Georg-August-University, 37077 Göttingen, Germany.

ABSTRACT
A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.
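
The abstract states that the FS records location, orientation, length, and width for each detected filament. As a minimal sketch only, the per-filament record might look like the following; the field names and units are assumptions, not taken from the paper's implementation.

```python
# Hypothetical sketch of a per-filament record as suggested by the abstract;
# field names and units are assumptions, not the authors' data format.
from dataclasses import dataclass

@dataclass
class Filament:
    x: float            # location of the filament (pixel coordinates)
    y: float
    orientation: float  # orientation angle relative to the image axes
    length: float       # filament length in pixels
    width: float        # filament width in pixels
```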

No MeSH data available.


pone.0126346.g016: Performance comparison in the presence of noise. Showing cell B2. Green pixels are false positives detected by the method, yellow pixels are correctly identified, and red pixels are missed, as in Fig 9. For this image the FS fares much better than the other two methods. Both the eLoG method and CID find a large number of spurious features: the eLoG method detects large contiguous areas, while CID produces a cobweb structure covering nearly the whole cell. In the left, lower, and central parts of the image in particular, the FS is the only method that does not detect a large number of spurious line pixels.
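
The color coding in the figure follows directly from comparing a method's binary detection mask with the manually labeled ground truth. The sketch below is illustrative only (not the authors' code) and assumes two hypothetical boolean arrays, `detected` and `ground_truth`, of equal shape.

```python
# Illustrative sketch: build an overlay image like Fig 16 from two boolean masks.
# `detected` and `ground_truth` are hypothetical inputs, not from the paper's code.
import numpy as np

def comparison_overlay(detected: np.ndarray, ground_truth: np.ndarray) -> np.ndarray:
    """Return an RGB image: green = false positive, yellow = correct, red = missed."""
    h, w = ground_truth.shape
    overlay = np.zeros((h, w, 3), dtype=np.uint8)
    overlay[detected & ground_truth] = (255, 255, 0)   # yellow: correctly identified pixels
    overlay[detected & ~ground_truth] = (0, 255, 0)    # green: spurious detections
    overlay[~detected & ground_truth] = (255, 0, 0)    # red: labeled pixels that were missed
    return overlay
```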

Mentions: Under noise as in Fig 16, which shows cell B2, the FS identifies 61.40% of labeled filament pixels with an oversegmentation rate of 45%, while CID finds 64.73% with an oversegmentation rate of 71.75%. Again, due to heavy oversegmentation, the eLoG method's results cannot be used for further analysis.
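
The two quantities quoted above can be computed from the same pair of masks. The following is a hedged sketch under the assumption that the detection rate is the fraction of labeled filament pixels a method finds and the oversegmentation rate is the fraction of detected pixels not present in the manual labeling; the paper may define these rates differently.

```python
# Hedged sketch of the quoted metrics; the exact definitions used in the paper
# are an assumption here. Inputs are boolean pixel masks of equal shape.
import numpy as np

def pixel_rates(detected: np.ndarray, ground_truth: np.ndarray) -> tuple[float, float]:
    """Return (detection_rate, oversegmentation_rate) for boolean pixel masks."""
    true_pos = np.count_nonzero(detected & ground_truth)
    false_pos = np.count_nonzero(detected & ~ground_truth)
    labeled_total = np.count_nonzero(ground_truth)
    detected_total = np.count_nonzero(detected)
    detection_rate = true_pos / labeled_total if labeled_total else 0.0
    oversegmentation_rate = false_pos / detected_total if detected_total else 0.0
    return detection_rate, oversegmentation_rate
```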

