Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues.

Liu AS, Tsunada J, Gold JI, Cohen YE - eNeuro (2015)

Bottom Line: Auditory perception depends on the temporal structure of incoming acoustic stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or discrete sounds for longer intervals.


Affiliation: Bioengineering Graduate Group.

ABSTRACT
Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects the perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led to listeners hearing the sequences as a single sound for short intervals or discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency of how auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects' speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence.
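As a concrete illustration of the task stimulus described in the abstract, the sketch below generates one tone-burst sequence whose frequency tends to rise or fall, with the inter-burst interval (IBI) serving as the grouping cue. The burst duration, base frequency, step size, and the way coherence is applied are all illustrative assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

def tone_burst_sequence(direction=+1, coherence=0.6, n_bursts=10,
                        burst_dur=0.05, ibi=0.05, base_freq=1000.0,
                        step_semitones=1.0, fs=44100, seed=None):
    """Sequence of tone bursts that tends to rise (direction=+1) or fall
    (direction=-1) in frequency; `coherence` is treated here as the probability
    that each frequency step follows the trend. Illustrative values only."""
    rng = np.random.default_rng(seed)
    freqs = [base_freq]
    for _ in range(n_bursts - 1):
        step = direction if rng.random() < coherence else -direction
        freqs.append(freqs[-1] * 2.0 ** (step * step_semitones / 12.0))

    t = np.arange(int(round(burst_dur * fs))) / fs
    ramp = int(0.005 * fs)                      # 5 ms onset/offset ramps
    env = np.ones(t.size)
    env[:ramp] = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    env[-ramp:] = env[:ramp][::-1]
    silence = np.zeros(int(round(ibi * fs)))

    pieces = []
    for f in freqs:
        pieces.extend([np.sin(2 * np.pi * f * t) * env, silence])
    return np.concatenate(pieces[:-1])          # drop the trailing silence

# Short IBIs make the bursts fuse into a single sound; long IBIs sound discrete.
waveform = tone_burst_sequence(direction=+1, coherence=0.6, ibi=0.01)
```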




Figure 8: Performance on the variable-duration task. A−C, Psychometric data are plotted as a function of listening duration for different coherences and IBIs, as indicated. Each data point reflects mean performance for all five subjects as a function of coherence and signal time (plotted in 0.2 s bins, up to 1.0 s, but fit using unbinned data). The solid curves are fits from the best-fitting model with two parameters: drift rate and accumulation leak. D, E, Best-fitting values of drift rate (D) and accumulation leak (E) plotted as a function of IBI for fits to data from individual subjects (black) or combined across all subjects (red). Dark lines and symbols indicate that the model fits were improved significantly by fitting the given parameter separately for each IBI condition (likelihood-ratio test, p < 0.01, Bonferroni-corrected for two parameters). Shaded lines and symbols indicate that the model fits were not improved significantly.
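The nested-model comparison described in this caption (a parameter shared across IBI conditions versus fit separately per IBI, assessed with a likelihood-ratio test at a Bonferroni-corrected criterion) can be sketched as follows. The log-likelihood values, the degrees-of-freedom bookkeeping, and the way the correction is applied are illustrative assumptions; the paper's own fitting code is not reproduced here.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_shared, loglik_per_ibi, extra_params):
    """Compare nested fits: one parameter value shared across the three IBI
    conditions (restricted model) versus one value per IBI (full model)."""
    lr_stat = 2.0 * (loglik_per_ibi - loglik_shared)
    p_value = chi2.sf(lr_stat, df=extra_params)
    return lr_stat, p_value

# Hypothetical log-likelihoods; going from 1 shared value to 3 per-IBI values
# adds 2 free parameters.
lr, p = likelihood_ratio_test(loglik_shared=-1250.0,
                              loglik_per_ibi=-1243.5,
                              extra_params=2)

alpha = 0.01 / 2  # one way to Bonferroni-correct across the two tested parameters
print(f"LR = {lr:.1f}, p = {p:.4f}, fit improved: {p < alpha}")
```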

Mentions: Mean performance accuracy for all five subjects improved systematically as a function of both coherence and signal time, in a manner that was qualitatively similar for all three IBI conditions (Fig. 8A−C). For each condition, accuracy tended to reach an upper asymptote of >99% correct in <1000 ms of signal time for the highest coherences; accuracy rose steadily at longer listening times for lower coherences.
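The two fitted parameters named in the figure legend, drift rate and accumulation leak, define a leaky accumulator of momentary evidence. The sketch below is a minimal Monte Carlo version of such a model, assuming unit-variance Gaussian noise, a fixed time step, and a sign-based choice at the end of the listening period; the parameter values are placeholders rather than the fitted values reported in the paper. It reproduces the qualitative pattern described above: accuracy grows with both coherence and signal time, and saturates earlier at high coherence.

```python
import numpy as np

def simulated_accuracy(coherence, signal_time, drift_rate=8.0, leak=2.0,
                       dt=0.01, n_trials=20000, seed=0):
    """Fraction of correct choices from a leaky accumulator read out at
    `signal_time`: dx = (drift_rate*coherence - leak*x)*dt + sqrt(dt)*noise.
    All parameter values are illustrative, not taken from the paper."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(signal_time / dt))
    x = np.zeros(n_trials)
    for _ in range(n_steps):
        x += (drift_rate * coherence - leak * x) * dt
        x += np.sqrt(dt) * rng.standard_normal(n_trials)
    # Positive accumulated evidence is taken as the correct choice here.
    return np.mean(x > 0)

for coh in (0.2, 0.5, 0.9):
    print(coh, [round(simulated_accuracy(coh, t), 2) for t in (0.2, 0.5, 1.0)])
```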

