Intrinsic neuronal properties switch the mode of information transmission in networks.

Gjorgjieva J, Mease RA, Moody WJ, Fairhall AL - PLoS Comput. Biol. (2014)

Bottom Line: Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity. This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission.


Affiliation: Center for Brain Science, Harvard University, Cambridge, Massachusetts, United States of America.

ABSTRACT
Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity. This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission.



pcbi-1003962-g001: LN models and f–I curves for gain-scaling (GS) and non-gain-scaling (NGS) neurons. A. The nonlinearities in the LN model framework for a GS (red) and an NGS (blue) neuron (with different maximal Na⁺ and K⁺ conductance densities, in pS/µm²), simulated as conductance-based model neurons (Eq. 2). The nonlinearities were computed using Bayes' rule: f(s) = r̄ P(s|spike)/P(s), where r̄ is the neuron's mean firing rate and s is the linearly filtered stimulus (see also Eq. 7 in Methods). B. The same nonlinearities as in A, in stimulus units scaled by σ (the magnitude of stimulus fluctuations). The nonlinearities overlap for GS neurons over a wide range of σ. C–D. The f–I curves for an NGS (C) and a GS neuron (D) for different values of σ. E. The output entropy as a function of the mean (DC) input and σ (the amplitude of fast fluctuations). F. Information about the output firing rate of the neurons as a function of σ.

Mentions: Repeating this procedure for noise stimuli with a range of standard deviations (σ) produces a family of curves for both neuron types (Figure 1A). While the linear feature is relatively constant as a function of the magnitude of the rapid fluctuations, σ, the nonlinear input-output curves change, similar to experimental observations in single neurons in cortical slices [8]. When the input is normalized by σ, the mature neurons have a common input-output curve with respect to the normalized stimulus (Figure 1B, red) [8] over a wide range of input DC. In contrast, the input-output curves of immature neurons have a different slope when compared in units of the normalized stimulus (Figure 1B, blue). Gain scaling has previously been shown to support a high rate of information transmission about stimulus fluctuations in the face of changing stimulus amplitude [1]. Indeed, these GS neurons have higher output entropy, and therefore transmit more information, than NGS neurons (Figure 1E). The output entropy is approximately constant regardless of σ for a range of mean (DC) inputs – this is a hallmark of their gain-scaling ability. The changing shape of the input-output curve for the NGS neurons results in an increasing output entropy as a function of σ (Figure 1E). With the addition of DC, the output entropy of the NGS neurons' firing eventually approaches that of the GS neurons; this is accompanied by a simultaneous decrease in the distance between rest and threshold membrane potential of the NGS neurons, as shown previously [8]. Thus, GS neurons are better at encoding fast fluctuations, a property that might enable efficient local computation independent of the background signal amplitude in more mature circuits after waves disappear.
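The Bayes' rule estimate of the nonlinearity and the gain-scaling collapse described above can be sketched numerically. The toy neuron below is an illustrative assumption, not the paper's conductance-based model of Eq. 2: it spikes whenever the filtered stimulus exceeds a threshold proportional to σ, so its estimated nonlinearities, when plotted against the normalized stimulus s/σ, overlap across stimulus amplitudes. The function names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_nonlinearity(s, spikes, bins):
    """Histogram estimate of the LN-model nonlinearity via Bayes' rule:
    f(s) = rbar * P(s | spike) / P(s)."""
    rbar = spikes.mean()  # mean firing probability per sample
    p_s, _ = np.histogram(s, bins=bins, density=True)
    p_s_spike, _ = np.histogram(s[spikes > 0], bins=bins, density=True)
    # Guard against empty bins; report 0 where the stimulus never lands.
    return np.where(p_s > 0, rbar * p_s_spike / np.maximum(p_s, 1e-12), 0.0)

def simulate(sigma, n=200_000, theta=1.5):
    """Toy gain-scaling neuron (a stand-in for the conductance-based
    model): spike whenever the filtered stimulus exceeds a threshold
    that scales with the stimulus amplitude sigma."""
    s = rng.normal(0.0, sigma, n)  # linearly filtered stimulus
    spikes = (s > theta * sigma).astype(float)
    return s, spikes

# Estimate the nonlinearity in normalized stimulus units s / sigma for
# several stimulus amplitudes, using identical bins for every sigma.
bins = np.linspace(-3.0, 3.0, 31)
curves = {}
for sigma in (0.5, 1.0, 2.0):
    s, spikes = simulate(sigma)
    curves[sigma] = estimate_nonlinearity(s / sigma, spikes, bins)

# Gain scaling: the normalized curves nearly coincide across sigma.
print(np.max(np.abs(curves[0.5] - curves[2.0])))
```

For a non-gain-scaling neuron one would instead fix the threshold in absolute units (spike when s > theta), in which case the curves in normalized units acquire different slopes for different σ, as in Figure 1B (blue).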

