A model of top-down gain control in the auditory system.

Schneider BA, Parker S, Murphy D - Atten Percept Psychophys (2011)

Bottom Line: There were three 20-session conditions: (1) four soft tones (25, 30, 35, and 40 dB SPL) in the set; (2) those four soft tones plus a 50-dB SPL tone; and (3) the four soft tones plus an 80-dB SPL tone. The results were well described by a top-down, nonlinear gain-control system in which the amplifier's gain depended on the highest intensity in the stimulus set. Individual participants' identification judgments were generally compatible with an equal-variance signal-detection model in which the mean locations of the distribution of effects along the decision axis were determined by the operation of this nonlinear amplification system.


Affiliation: Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Rd., Mississauga, ON, L5L 1C6, Canada. bruce.schneider@utoronto.ca

ABSTRACT
To evaluate a model of top-down gain control in the auditory system, 6 participants were asked to identify 1-kHz pure tones differing only in intensity. There were three 20-session conditions: (1) four soft tones (25, 30, 35, and 40 dB SPL) in the set; (2) those four soft tones plus a 50-dB SPL tone; and (3) the four soft tones plus an 80-dB SPL tone. The results were well described by a top-down, nonlinear gain-control system in which the amplifier's gain depended on the highest intensity in the stimulus set. Individual participants' identification judgments were generally compatible with an equal-variance signal-detection model in which the mean locations of the distribution of effects along the decision axis were determined by the operation of this nonlinear amplification system.


Fig7: Frequency histogram for the aggregation of 10,000 samples from each of four normal distributions having the same mean (μ = 0) but different standard deviations (σ = 0.25, 0.50, 0.75, and 1.0, respectively). The smooth curve fit to this histogram is what we would expect if the aggregate data were generated from a single Laplace distribution

Mentions (GEV versus LEV models): In signal-detection analyses of one-dimensional, m-alternative, absolute-identification (AI) experiments, it is usually assumed that the m stimuli give rise to m equal-variance Gaussian distributions along a unidimensional decision axis (see Macmillan & Creelman, 2005). However, Parker et al. (2002), Gordon and Schneider (2007), and Murphy, Schneider, and Bailey (2010) have argued that equal-variance Laplace distributions provide a better fit to group AI data. This result is somewhat counterintuitive, especially if the distribution of effects along the decision axis is thought to arise from noise (or an accumulation of a large number of small errors) in the decision process, which, according to the central limit theorem, should give rise to Gaussian-shaped distributions. Schneider (2007) has shown that the Laplace distribution will provide a better description than the Gaussian distribution when responses to stimuli in an AI experiment are aggregated over participants with unequal sensory acuities, even when each individual participant’s data are equal-variance Gaussian. The LEV model would also fit better than the GEV model when data are aggregated over sessions within an individual, provided there are substantial changes in discriminability over time. To illustrate why this happens, in Fig. 7 we aggregated 10,000 random samples from each of four normal distributions having the same mean (μ = 0) but four different standard deviations (σ = 0.25, 0.50, 0.75, and 1.00, respectively) and plotted the histogram of the combined random samples (n = 40,000). Figure 7 shows that the Laplace distribution provides a very good fit to the aggregate of random samples from normal distributions having the same mean but different standard deviations.
Hence, this is what we would expect the distribution of effects along the decision axis to look like if we aggregated responses to a stimulus across four individuals having different sensory acuities (different standard deviations), or within an individual whose acuity changes from session to session. Conversely, when discriminability is constant both across and within participants, we would expect the GEV model to provide the better fit if the decision process is based on equal-variance normal distributions. Because there is abundant evidence that discrimination performance differs across individuals (e.g., Nizami, Reimer, & Jesteadt, 2001), we should expect the LEV model to fit group data better than the GEV model.
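The aggregation demonstration behind Fig. 7 can be sketched in a few lines of NumPy. This is a minimal re-creation, not the authors' code: it pools 10,000 samples from each of the four zero-mean normals, fits a single Gaussian and a single Laplace to the pooled data by maximum likelihood (sample mean/SD for the Gaussian; median and mean absolute deviation for the Laplace), and compares total log-likelihoods. If the argument above is right, the heavier-tailed, sharply peaked Laplace should win on the aggregate even though every component is Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Aggregate 10,000 samples from each of four normal distributions with
# the same mean (mu = 0) but different standard deviations, as in Fig. 7.
sigmas = [0.25, 0.50, 0.75, 1.00]
samples = np.concatenate([rng.normal(0.0, s, 10_000) for s in sigmas])

# Maximum-likelihood fit of a single Gaussian to the pooled data.
mu_hat, sd_hat = samples.mean(), samples.std()

# Maximum-likelihood fit of a single Laplace: location = median,
# scale = mean absolute deviation about the median.
loc_hat = np.median(samples)
b_hat = np.mean(np.abs(samples - loc_hat))

# Total log-likelihood of the aggregate under each fitted distribution.
ll_gauss = np.sum(-0.5 * np.log(2 * np.pi * sd_hat**2)
                  - (samples - mu_hat) ** 2 / (2 * sd_hat**2))
ll_laplace = np.sum(-np.log(2 * b_hat) - np.abs(samples - loc_hat) / b_hat)

print(f"Gaussian log-likelihood: {ll_gauss:.1f}")
print(f"Laplace  log-likelihood: {ll_laplace:.1f}")
```

With these parameters the Laplace log-likelihood comes out clearly higher: the σ = 0.25 component piles mass near zero, producing the cusp-like peak and heavy tails that a Laplace density captures and a single Gaussian cannot.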

