The filament sensor for near real-time detection of cytoskeletal fiber structures.

Eltzner B, Wollnik C, Gottschlich C, Huckemann S, Rehfeldt F - PLoS ONE (2015)

Bottom Line: Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.


Affiliation: Institute for Mathematical Stochastics, Georg-August-University, 37077 Göttingen, Germany.

ABSTRACT
A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.



pone.0126346.g006: Some circle masks. These are examples of the circular masks used by the segment sensor algorithm to determine line width. The circles displayed here correspond to diameters of 2, 4, 6 and 8 pixels. The masks are squares with an odd number of pixels as they are centered at a unique pixel.
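A minimal sketch of how such masks can be constructed, assuming NumPy and the convention described in the caption (the square side is the smallest odd number covering the circle, so the mask has a unique center pixel); the helper name `circle_mask` is hypothetical, not from the paper's code:

```python
import numpy as np

def circle_mask(diameter):
    # Hypothetical helper: smallest odd-sided square covering the
    # circle, so the mask is centered at a unique pixel (cf. Fig 6).
    side = diameter if diameter % 2 else diameter + 1
    c = side // 2
    yy, xx = np.ogrid[:side, :side]
    # A pixel belongs to the mask if its center lies within the circle
    # of the given diameter around the central pixel.
    return (yy - c) ** 2 + (xx - c) ** 2 <= (diameter / 2) ** 2

for d in (2, 4, 6, 8):
    m = circle_mask(d)
    print(d, m.shape)
```

For diameter 2 this yields a 3 × 3 mask containing the center pixel and its four direct neighbors; larger diameters fill out progressively rounder odd-sided squares.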

Mentions: Every white pixel B(x, y) ∈ B+ is assigned a width W(x, y), which yields a width map W. This is done by taking circular neighborhoods of the pixel (cf. Fig 6) with increasing diameter. A diameter is accepted if the fraction of white pixels of the binary image within the mask is above an adjustable tolerance (with default value 95%). If a diameter is accepted, the next larger diameter is tested, until a diameter is rejected. The width W(x, y) at the pixel is then given by the largest accepted diameter at the pixel. In particular, this gives a range of widths 1 ≤ w1 < … < wk = max W(x, y) attained by pixels in W, and it extends the binary image B to a nested sequence of binary images Bj with white pixels Bj+ = {(x, y) ∈ B+ : W(x, y) ≥ wj}, j = 1, …, k. A temporary list L, the filament data set 𝓕, and the orientation field 𝓞 are each initialized as the empty set.
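The width-map step described above can be sketched as follows, assuming NumPy; the 95% tolerance is the paper's stated default, while the `max_diameter` cap, the zero-padding at the image border, and the function name `width_map` are assumptions made for this illustration:

```python
import numpy as np

def width_map(binary, max_diameter=30, tolerance=0.95):
    # binary: 2D boolean array, True = white (filament) pixel.
    # For each white pixel, grow circular masks of increasing diameter
    # and record the largest accepted diameter as the pixel's width.
    W = np.zeros(binary.shape, dtype=int)
    pad = max_diameter                      # zero-pad to avoid border checks
    P = np.pad(binary, pad)
    for y, x in zip(*np.nonzero(binary)):
        for d in range(1, max_diameter + 1):
            side = d if d % 2 else d + 1    # odd square covering the circle
            r = side // 2
            yy, xx = np.ogrid[-r:r + 1, -r:r + 1]
            mask = yy ** 2 + xx ** 2 <= (d / 2) ** 2
            patch = P[y + pad - r:y + pad + r + 1,
                      x + pad - r:x + pad + r + 1]
            # Accept the diameter if the white-pixel fraction inside the
            # mask reaches the tolerance; otherwise stop growing.
            if patch[mask].mean() < tolerance:
                break
            W[y, x] = d                     # largest accepted diameter so far
    return W
```

On a horizontal white stripe 5 pixels high, interior pixels receive width 5 (the diameter-6 circle already picks up too many black pixels), while pixels on the stripe's edge only accept diameter 1.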

