A theory of power laws in human reaction times: insights from an information-processing approach.

Medina JM, Díaz JA, Norwich KH - Front Hum Neurosci (2014)


Affiliation: Departamento de Óptica, Facultad de Ciencias, Universidad de Granada Granada, Spain.

AUTOMATICALLY GENERATED EXCERPT

RT has been a fundamental measure of sensory-motor latency at suprathreshold conditions for more than a century and is one of the hallmarks of human performance in everyday tasks (Luce, ; Meyer et al., )... Some examples are the measurement of RTs in sports science, driving safety, or aging... (1) The probability density function (pdf) is often heavy-tailed and can lead to an asymptotic power-law distribution in the right tail (Holden et al., ; Moscoso del Prado Martín, ; Sigman et al., ). (2) RT variability (e.g., the variance) is not bounded and usually shows a power relation with the mean, with an exponent β close to unity (Luce, ; Wagenmakers and Brown, ; Holden et al., ; Medina and Díaz,, )... And (3), mean RTs decay as the stimulus strength increases (Cattell, ), a behavior that is well described by a truncated power function written in the form of Piéron's law (Piéron,, ; Luce, ), t_{n+1} = t_n + d·S^{-p}, where t_{n+1} indicates the mean RT, S is the stimulus strength (e.g., loudness intensity, odor concentration, etc.), t_n represents the asymptotic component of the mean RT reached at very high stimulus strength, and d and p are two parameters (Luce, )... The H-function evolves from a previous state of maximum uncertainty reached at the encoding time t_0, H(1/t_0), to a final adapting stage with a lower uncertainty H(1/t_{n+1}) at which a reaction occurs (t_{n+1} > t_0)... Maximum entropy production followed by a reduction of uncertainty ΔH over time are concepts taken from statistical physics, the latter as expressed by Boltzmann (Norwich, )... The exponent p usually takes non-integer values and could indicate a signature of self-organized criticality in a phase transition (Kinouchi and Copelli, )... Here the concept of a phase transition does not refer to the classical view of different states of matter in thermodynamics (e.g., liquid vs. gas), but to different states of connectivity between neurons as modeled by branching processes (Kinouchi and Copelli, )... If RTs are longer than the asymptotic term t_n, the RT pdf is distributed as a power law with an exponent γ that depends on the exponent p of Piéron's law (Medina, ): γ = 1 + (c/p), c being a constant... Two different regimes are observed: for p > 0.6 the central moments diverge, and for p ≤ 0.6 they are finite (Medina, )... Therefore, RTs that are long compared with the asymptotic term t_n are considered intermittent events over time... A third property is that the reciprocal of Piéron's law is invariant under rescaling (Chater and Brown, ; Medina, )... Taking the reciprocal of the mean RT, R = 1/t_{n+1}, and the reciprocal of the irreducible asymptotic term, R_max = 1/t_n, in Equation (4) gives R = R_max / [1 + (S_0/S)^p]... A power-law relationship between the variance and the mean of the stimulus population has been proposed in the H-function (Norwich, ), and this relationship could be compatible with the RT variance-mean relationship in the regime around p > 0.6 (Medina,, ).
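As a numerical illustration of how Piéron's law relates to its reciprocal (Hill-type) form, the short Python sketch below evaluates both expressions over a range of stimulus strengths. This is a minimal sketch, not the authors' code: the parameter values (t_n, d, p) and the constant c in γ = 1 + c/p are placeholders chosen for illustration only.

    # Illustrative sketch only; parameter values are assumed, not taken from the article.
    import numpy as np

    t_n = 0.18           # asymptotic mean RT (s), assumed value
    d, p = 0.05, 0.33    # Pieron parameters, assumed values
    S = np.logspace(-1, 3, 200)   # stimulus strength, arbitrary units

    # Pieron's law: mean RT decays toward t_n as a truncated power function of S.
    t_mean = t_n + d * S**(-p)

    # Reciprocal (Hill-type) form: with S0**p = d / t_n,
    # R = R_max / (1 + (S0 / S)**p), where R = 1/t_mean and R_max = 1/t_n.
    S0 = (d / t_n)**(1.0 / p)
    R_hill = (1.0 / t_n) / (1.0 + (S0 / S)**p)
    assert np.allclose(1.0 / t_mean, R_hill)   # the two expressions coincide

    # Predicted tail exponent of the RT pdf, gamma = 1 + c/p; c is left
    # unspecified in the excerpt, so a placeholder value is used here.
    c = 1.0
    gamma = 1.0 + c / p
    print(f"S0 = {S0:.3g}, gamma = {gamma:.2f}")

Because the two expressions are algebraically identical, the assert is only a sanity check on the rearrangement; the point of the sketch is that the same parameters (t_n, d, p) fix both the mean-RT curve and the saturating response-rate curve.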



Figure 1: (A) Schematic representation of the information entropy function H(1/t) (in bits) as a function of the time t (Norwich, 1993). The transfer of information ΔH is defined in Equation (2) from the encoding time t_0 until a reaction occurs at t_{n+1}. (a.u.) = arbitrary units. (B) Schematic representation of a model of hyperbolic growth in reaction times based on Piéron's law and analogous to Michaelis-Menten kinetics in biochemistry (i.e., the Hill equation) (Pins and Bonnet, 1996). In Michaelis-Menten kinetics, an enzyme E is bound to a substrate U to form a complex EU, which is converted into a product D plus the enzyme E. In Piéron's law, the neurons tuned at the time t_n are bound to the neurons that form an internal threshold S_0 in b_n = (S_0/S)^p, yielding the term t_n b_n, which is converted into the product t_n b_n plus the time t_n. Red double arrows indicate that the "reaction" is reversible, whereas green single arrows indicate that the "reaction" proceeds in one direction only.
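Written out, the analogy sketched in panel B takes the standard Michaelis-Menten/Hill form. The following equations are a reconstruction from the caption's definitions (with b_n = (S_0/S)^p), not a formula quoted verbatim from the article:

\[
E + U \;\rightleftharpoons\; EU \;\longrightarrow\; D + E,
\qquad
R \;=\; \frac{R_{\max}\, S^{p}}{S_{0}^{\,p} + S^{p}} \;=\; \frac{R_{\max}}{1 + (S_{0}/S)^{p}},
\]

so that, with \(R = 1/t_{n+1}\) and \(R_{\max} = 1/t_{n}\), the mean RT obeys \(t_{n+1} = t_{n}\,(1 + b_{n})\), i.e., Piéron's law with \(d = t_{n} S_{0}^{\,p}\).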

Mentions: Figure 1A represents the entropy function H in Equation (2). At least two stages can be differentiated. The H-function evolves from a previous state of maximum uncertainty reached at the encoding time t_0, H(1/t_0), to a final adapting stage with a lower uncertainty H(1/t_{n+1}) at which a reaction occurs (t_{n+1} > t_0). Maximum entropy production followed by a reduction of uncertainty ΔH over time are concepts taken from statistical physics, the latter as expressed by Boltzmann (Norwich, 1993). Based on an analytical model of the H-function (Norwich, 1993), the gain of information ΔH is connected with the formation of an internal threshold in Equation (1) (Norwich et al., 1989; Medina, 2009).
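Equation (2) itself is not reproduced in this excerpt; consistent with the definitions above (H falls from its maximum at the encoding time t_0 to a lower value when the reaction occurs at t_{n+1}), the transferred information has the general form

\[
\Delta H \;=\; H\!\left(\frac{1}{t_{0}}\right) \;-\; H\!\left(\frac{1}{t_{n+1}}\right) \;\geq\; 0,
\qquad t_{n+1} > t_{0},
\]

where the specific functional form of H is given by Norwich's analytical model and is not restated here.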

