The filament sensor for near real-time detection of cytoskeletal fiber structures.

Eltzner B, Wollnik C, Gottschlich C, Huckemann S, Rehfeldt F - PLoS ONE (2015)

Bottom Line: Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.

View Article: PubMed Central - PubMed

Affiliation: Institute for Mathematical Stochastics, Georg-August-University, 37077 Göttingen, Germany.

ABSTRACT
A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source.

No MeSH data available.


pone.0126346.g007: Illustration of the segment sensor's probing paths. Some connectivity lines (left) with detail (right) along which the segment sensor probes. The lines illustrated correspond to multiples of 10° to render the illustration legible. The algorithm uses multiples of 1°, which implies ten times as many paths. The detail shows that the lines are thin in the sense that pixels on diagonal lines touch only at the corner points.

Mentions: For each pixel, the segment sensor probes in a number of directions in Bj (1°, …, 360° by default; this corresponds to 180 orientations). For each direction, it determines the maximal length at which pixels can be found connected by a straight line to Bj(x, y), as illustrated in Fig 7. The largest line segment, acquired as the union of the lines of two opposing directions, is stored to L if its length exceeds an adjustable threshold of minimal filament length (20 pixels by default). These pixel data include only the centerline pixels.
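The probing procedure described above can be sketched as follows. This is a minimal illustrative reimplementation, not the authors' published code: the function names (`probe_direction`, `longest_segment`), the pixel-stepping scheme, and the representation of the binary image as a nested list are all assumptions made for the sketch. It probes each of 180 orientations in 1° steps, joins the two opposing probes through the seed pixel, and keeps the longest union if it meets the minimum-length threshold.

```python
import math

def probe_direction(mask, x, y, angle_deg, max_steps=1000):
    """Collect set pixels along a thin straight path from (x, y).

    Steps outward in roughly 1-pixel increments and stops at the first
    position that is outside the image or not set in the binary mask.
    A hypothetical stand-in for probing along one connectivity line.
    """
    h, w = len(mask), len(mask[0])
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    pixels, seen = [], set()
    for step in range(1, max_steps):
        px = round(x + step * dx)
        py = round(y + step * dy)
        if not (0 <= px < w and 0 <= py < h) or not mask[py][px]:
            break
        if (px, py) not in seen:  # rounding can revisit a pixel
            seen.add((px, py))
            pixels.append((px, py))
    return pixels

def longest_segment(mask, x, y, min_length=20):
    """Return the longest centerline segment through (x, y), or None.

    For each of the 180 orientations, the probes in the two opposing
    directions are joined through (x, y); the longest union is kept
    if it meets the minimal-filament-length threshold (in pixels).
    """
    if not mask[y][x]:
        return None
    best = []
    for angle in range(0, 180):  # 1-degree steps, 180 orientations
        forward = probe_direction(mask, x, y, angle)
        backward = probe_direction(mask, x, y, angle + 180)
        segment = backward[::-1] + [(x, y)] + forward
        if len(segment) > len(best):
            best = segment
    return best if len(best) >= min_length else None
```

For a horizontal run of 30 set pixels, seeding anywhere on the run recovers the full 30-pixel centerline; with a threshold above the run's length, the candidate is discarded, mirroring the adjustable minimal-filament-length parameter.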
