FindFoci: a focus detection algorithm with automated parameter training that closely matches human assignments, reduces human inconsistencies and increases speed of analysis.

Herbert AD, Carr AM, Hoffmann E - PLoS ONE (2014)

Bottom Line: The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ.

Affiliation: MRC Genome Damage and Stability Centre, School of Life Sciences, University of Sussex, Brighton, BN1 9RQ, United Kingdom.

ABSTRACT
Accurate and reproducible quantification of the accumulation of proteins into foci in cells is essential for data interpretation and for biological inferences. To improve reproducibility, much emphasis has been placed on the preparation of samples, but less attention has been given to reporting and standardizing the quantification of foci. The current standard to quantitate foci in open-source software is to manually determine a range of parameters based on the outcome of one or a few representative images and then apply the parameter combination to the analysis of a larger dataset. Here, we demonstrate the power and utility of using machine learning to train a new algorithm (FindFoci) to determine optimal parameters. FindFoci closely matches human assignments and allows rapid automated exploration of parameter space. Thus, individuals can train the algorithm to mirror their own assignments and then automate focus counting using the same parameters across a large number of images. Using the training algorithm to match human assignments of foci, we demonstrate that applying an optimal parameter combination from a single image is not broadly applicable to analysis of other images scored by the same experimenter or by other experimenters. Our analysis thus reveals wide variation in human assignment of foci and their quantification. To overcome this, we developed training on multiple images, which reduces the inconsistency of using a single or a few images to set parameters for focus detection. FindFoci is provided as an open-source plugin for ImageJ.

pone-0114749-g004 (Figure 4): Interpretation of low intensity foci causes variation in focus quantification between experimenters. Plotted are the pixel intensities of the foci selected by experimenters P1 and P2 (left panel), P1 and P3 (middle panel), and P2 and P3 (right panel) for an example image from the dataset. Foci selected by both experimenters in a pair (within 8 pixels of each other) are shown as crosses (‘Match’); foci selected by only the first experimenter of the pair are shown as dashes on the X-axis, and foci selected by only the second are shown as dashes on the Y-axis. A best-fit line for the matched pairs is shown in blue.

Mentions: If experimenters differ in their assessment of background noise, then one would expect faint foci to cause discordance between experimenters, whereas high-intensity foci should be picked by both experimenters in the three pairwise comparisons. To test whether this was the case, we plotted the intensities of the pixels marked by each experimenter in a scatter plot (a plot for a typical image is shown in Figure 4). Matched foci are marked with a cross; unmatched foci have an intensity value from only one experimenter and are therefore placed on the X or Y axis of the experimenter who selected them. The majority of unmatched points on the X and Y axes lie in the lower range of pixel values (Figure 4), showing that foci selected by only one of the experimenters tend to be the less intense maxima. A best-fit line is shown for the intensities of matched pairs between experimenters (Figure 4, blue lines). Deviation from this line indicates variation in the marked centre of a focus, which should be the same maximal-intensity pixel for both experimenters, and thus highlights inaccuracy by one or both experimenters in selecting the pixel with the maximal value in the focus.
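
To make the pairwise comparison concrete, the sketch below reproduces the analysis in outline: foci marked by two experimenters are paired when their coordinates lie within 8 pixels of each other, matched intensities are plotted against one another with a least-squares best-fit line (blue), and unmatched foci are placed on the axis of the experimenter who marked them. This is a minimal illustration in Python (NumPy/Matplotlib), not the authors' code; the function names, the greedy nearest-neighbour matching, and the (x, y, intensity) input format are assumptions made for illustration.

```python
# Minimal sketch of the scatter-plot comparison described above.
# Assumptions (not from the paper's code): each experimenter's foci are given
# as (x, y, intensity) tuples, and matching is done greedily by nearest
# neighbour within an 8-pixel radius.
import numpy as np
import matplotlib.pyplot as plt

MATCH_RADIUS = 8  # foci within 8 pixels of each other count as the same focus


def match_foci(foci_a, foci_b, radius=MATCH_RADIUS):
    """Pair foci from two experimenters whose centres lie within `radius` pixels."""
    a = np.asarray(foci_a, dtype=float)  # columns: x, y, intensity
    b = np.asarray(foci_b, dtype=float)
    matched, used_a, used_b = [], set(), set()
    for i in range(len(a)):
        d = np.hypot(b[:, 0] - a[i, 0], b[:, 1] - a[i, 1])
        for j in np.argsort(d):
            if d[j] > radius:
                break
            if int(j) not in used_b:
                matched.append((a[i, 2], b[j, 2]))  # (intensity A, intensity B)
                used_a.add(i)
                used_b.add(int(j))
                break
    only_a = np.array([a[i, 2] for i in range(len(a)) if i not in used_a])
    only_b = np.array([b[j, 2] for j in range(len(b)) if j not in used_b])
    return np.array(matched), only_a, only_b


def plot_pair(matched, only_a, only_b, label_a="P1", label_b="P2"):
    """Scatter plot of matched intensities, with unmatched foci placed on the axes."""
    if len(matched):
        plt.scatter(matched[:, 0], matched[:, 1], marker="x", label="Match")
        slope, intercept = np.polyfit(matched[:, 0], matched[:, 1], 1)  # best fit
        xs = np.linspace(matched[:, 0].min(), matched[:, 0].max(), 50)
        plt.plot(xs, slope * xs + intercept, color="blue")
    # Unmatched foci have only one intensity, so they sit on that experimenter's axis.
    plt.scatter(only_a, np.zeros_like(only_a), marker="_", label=f"{label_a} only")
    plt.scatter(np.zeros_like(only_b), only_b, marker="_", label=f"{label_b} only")
    plt.xlabel(f"{label_a} focus intensity")
    plt.ylabel(f"{label_b} focus intensity")
    plt.legend()
    plt.show()


# Example usage with made-up coordinates and intensities:
p1 = [(10, 12, 850), (40, 41, 300), (70, 15, 120)]
p2 = [(11, 13, 860), (39, 42, 310), (90, 90, 95)]
plot_pair(*match_foci(p1, p2), label_a="P1", label_b="P2")
```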

