VoICE: A semi-automated pipeline for standardizing vocal analysis across models.

Burkett ZD, Day NF, Peñagarikano O, Geschwind DH, White SA - Sci Rep (2015)

Bottom Line: When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories.


Affiliation: [1] Department of Integrative Biology & Physiology, University of California, Los Angeles, California 90095; [2] Interdepartmental Program in Molecular, Cellular, & Integrative Physiology, University of California, Los Angeles, California 90095.

ABSTRACT
The study of vocal communication in animal models provides key insight into the neurogenetic basis for speech and communication disorders. Current methods for vocal analysis suffer from a lack of standardization, creating ambiguity in cross-laboratory and cross-species comparisons. Here, we present VoICE (Vocal Inventory Clustering Engine), an approach to grouping vocal elements by creating a high-dimensionality dataset through scoring spectral similarity between all vocalizations within a recording session. This dataset is then subjected to hierarchical clustering, generating a dendrogram that is pruned into meaningful vocalization "types" by an automated algorithm. When applied to birdsong, a key model for vocal learning, VoICE captures the known deterioration in acoustic properties that follows deafening, including altered sequencing. In a mammalian neurodevelopmental model, we uncover a reduced vocal repertoire of mice lacking the autism susceptibility gene, Cntnap2. VoICE will be useful to the scientific community as it can standardize vocalization analyses across species and laboratories.
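To make the workflow concrete, the sketch below mirrors the steps the abstract describes: score pairwise spectral similarity across all syllables in a recording session, cluster the resulting similarity matrix hierarchically, and cut the dendrogram into syllable "types". It is an illustrative Python approximation, not the published implementation: VoICE scores similarity with dedicated song-analysis software, whereas a simple spectrogram correlation stands in here, and the automated dendrogram pruning is replaced by a fixed cluster count.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_syllables(spectrograms, n_types):
    """Illustrative stand-in for the VoICE workflow: pairwise spectral
    similarity -> hierarchical clustering -> pruned syllable 'types'.
    Assumes each syllable spectrogram has been resampled to a common
    shape so spectrograms can be compared element-wise."""
    n = len(spectrograms)
    sim = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            # Crude similarity score: correlation of flattened spectrograms
            # (the paper uses dedicated spectral-similarity scoring instead).
            r = np.corrcoef(spectrograms[i].ravel(),
                            spectrograms[j].ravel())[0, 1]
            sim[i, j] = sim[j, i] = r
    dist = squareform(1.0 - sim, checks=False)  # similarity -> distance
    tree = linkage(dist, method="average")      # hierarchical clustering
    # VoICE prunes the dendrogram with an automated algorithm; asking for a
    # fixed number of clusters is a simplification for this sketch.
    return fcluster(tree, t=n_types, criterion="maxclust")
```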




Figure 2: VoICE detects deafening-induced alterations in song phonology and syntax. (a) Spectrograms reveal song deterioration in deafened, but not sham-deafened, birds. (b) Syllables are assigned in a temporally reversed serial manner to account for ongoing changes in syllable structure. (c) Syllable entropy, a measure of spectral ‘noise’, increases in a majority of syllables after deafening. Asterisks denote statistically significant changes from before surgery (left). Bar plots represent Pre (Day 0) vs. Post* (the first day statistically significantly different from ‘Pre’) vs. Post (the last analyzed day) recordings. Each symbol and line (left) and its corresponding pair of bars (right) represent a syllable cluster. (One-way resampling ANOVA, Bonferroni-corrected post-hoc multiple comparisons, p < 0.05.) (d) Syntax similarity to pre-surgery decreases following deafening. (Black = sham; blue, red = deaf; * = p < 0.05, resampling independent mean differences. Scale bars = 250 msec in a and b.)
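Panel (c) tracks Wiener entropy, which is the log ratio of the geometric to the arithmetic mean of the power spectrum: near 0 for broadband noise, strongly negative for tonal, harmonically clean sounds. A minimal sketch of that measure follows; the function name and the floor constant `eps` are illustrative, not taken from the paper.

```python
import numpy as np

def wiener_entropy(power_spectrum, eps=1e-12):
    """Log ratio of geometric to arithmetic mean of a power spectrum.
    Approaches 0 for white noise and becomes strongly negative for
    pure tones."""
    p = np.asarray(power_spectrum, dtype=float) + eps  # avoid log(0)
    geometric_mean = np.exp(np.mean(np.log(p)))
    arithmetic_mean = np.mean(p)
    return np.log(geometric_mean / arithmetic_mean)
```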

Mentions: Like humans, zebra finches require auditory feedback to maintain mature vocalizations, and the degradation of zebra finch song structure and syntax in the absence of hearing is well characterized [17,18]. To demonstrate the utility of VoICE in tracking changes to vocalizations, two adult zebra finches (>120 d) were deafened. Song deterioration and syntax impairment were evaluated over a 4-month time frame. Representative spectrograms illustrate the stereotypy of mature zebra finch song (Fig. 2a, pre-) and the variability in the time course of deafening-induced song changes (Fig. 2a, post-). Initial clusters were assembled from a pre-deafening singing epoch (Fig. 1a, top). Syllables from the first analyzed time point following deafening were then assigned to the pre-deafening clusters. For each subsequent time point, the first ~300 syllables from each day were assigned using the most recently clustered session (Fig. 2b). As syllables degraded, the global similarity floor was manually lowered to 35 to enable continual assignment, reduce tiebreaking, and prevent novel syllable classification. After all time points were clustered, Wiener entropy (Fig. 2c) and syntax similarity (Fig. 2d) were examined (for additional acoustic measures, see Fig. S2). As expected, syllable structure and syntax from a control bird (sham-deafened) were relatively unchanged throughout the recordings. In similarly aged deafened birds, statistically significant changes to syllables were observed within 20 days (one-way resampling ANOVA, Bonferroni-corrected post-hoc multiple comparisons, p < 0.05). In comparison, changes to the syllables of the sham-deafened bird were smaller, in a different direction, and occurred only after ~80 days, possibly reflecting ongoing increases in behavioral precision with aging. The songs of the two deafened birds deteriorated in different domains: one had significant decreases in the entropy of his syllables, consistent with syllable degradation (Fig. 2d, blue), whereas the other bird showed substantial decay in syntax (Fig. 2d, red) but only minor phonological changes. Both phenomena have been previously observed following deafening in this species, supporting the ability of VoICE to capture key facets of birdsong [17–21].
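The per-cluster statistics above rely on resampling rather than parametric tests. The sketch below shows the generic idea behind a "resampling independent mean differences" comparison for one syllable cluster, with significance judged against a Bonferroni-corrected threshold across clusters; the function name, resample count, and seed are assumptions for illustration, not the authors' code.

```python
import numpy as np

def resampled_mean_diff_p(pre, post, n_resamples=10000, seed=0):
    """Two-sided permutation test for a difference in means between
    pre- and post-deafening measurements (e.g. per-syllable Wiener
    entropy within one cluster)."""
    rng = np.random.default_rng(seed)
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    observed = post.mean() - pre.mean()
    pooled = np.concatenate([pre, post])
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # relabel pre/post at random
        diff = pooled[len(pre):].mean() - pooled[:len(pre)].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_resamples

# With k syllable clusters tested, a Bonferroni correction requires
# p < 0.05 / k before a cluster's change is called significant.
```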