Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations.

van Albada SJ, Helias M, Diesmann M - PLoS Comput. Biol. (2015)

Bottom Line: The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.


Affiliation: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA BRAIN Institute I, Jülich Research Centre, Jülich, Germany.

ABSTRACT
Network models are routinely downscaled compared to nature in terms of numbers of nodes or edges because of a lack of computational resources, often without explicit mention of the limitations this entails. While reliable methods have long existed to adjust parameters such that the first-order statistics of network dynamics are conserved, here we show that limitations already arise if second-order statistics are also to be maintained. The temporal structure of pairwise averaged correlations in the activity of recurrent networks is determined by the effective population-level connectivity. We first show that in general the converse is also true and explicitly mention degenerate cases in which this one-to-one relationship does not hold. The one-to-one correspondence between effective connectivity and the temporal structure of pairwise averaged correlations implies that network scalings should preserve the effective connectivity if pairwise averaged correlations are to be held constant. Changes in effective connectivity can even push a network from a linearly stable to an unstable, oscillatory regime and vice versa. On this basis, we derive conditions for the preservation of both mean population-averaged activities and pairwise averaged correlations under a change in numbers of neurons or synapses in the asynchronous regime typical of cortical networks. We find that mean activities and correlation structure can be maintained by an appropriate scaling of the synaptic weights, but only over a range of numbers of synapses that is limited by the variance of external inputs to the network. Our results therefore show that the reducibility of asynchronous networks is fundamentally limited.
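One way to see the limit stated in the abstract is through the standard diffusion-approximation working point of a leaky integrate-and-fire (LIF) network; the notation below is schematic and not taken from the paper itself. The mean and variance of the input to a neuron are

\begin{align}
  \mu      &= \tau \sum_k K_k J_k \nu_k + \mu_{\mathrm{ext}}, \\
  \sigma^2 &= \tau \sum_k K_k J_k^2 \nu_k + \sigma^2_{\mathrm{ext}}.
\end{align}

Scaling J_k ∝ 1/K_k leaves the recurrent mean terms K_k J_k ν_k unchanged, but the recurrent variance terms K_k J_k^2 ν_k grow as 1/K_k when the in-degrees are reduced. Holding σ² fixed then requires shrinking σ²_ext, which is only possible as long as σ²_ext remains non-negative; this is the sense in which the admissible range of synapse numbers is limited by the variance of the external inputs.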


Scaling synaptic strengths as J ∝ 1/K can preserve correlations in networks with widely different firing rates.Results of simulations of a LIF network consisting of one excitatory and one inhibitory population (Table 2). Average cross-covariances are determined with a resolution of 0.1 ms and are shown for excitatory-inhibitory neuron pairs. Each network receives a balanced Poisson drive with excitatory and inhibitory rates both given by , where  is chosen to maintain the working point of the full-scale network. The synaptic strengths for the external drive are 0.1 mV and −0.1 mV for excitatory and inhibitory synapses, respectively. A DC drive with strength μext is similarly adjusted to maintain the full-scale working point. All networks are simulated for 100 s. For each population, cross-covariances are computed as averages over all neuron pairs across two disjoint groups of 𝓝 × 1000 neurons, where 𝓝 is the scaling factor for the number of neurons (a given pair has one neuron in each group). Autocovariances are computed as averages over 100 neurons in each population. A, B Reducing in-degrees K to 50% while the number of neurons N is held constant, J ∝ 1/K closely preserves both the size and the shape of the covariances, while  diminishes their size. C, D Reducing both N and K to 50%, covariance sizes scale with 1/N for J ∝ 1/K but with a different factor for . Dashed curves represent theoretical predictions. The insets show mean autocovariances for time lags Δ ∈ (−30, 30) ms.
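The averaging described in the caption (all pairs across two disjoint groups, spike trains binned at 0.1 ms) can be computed efficiently by exploiting the additivity of covariances: because the two groups are disjoint, the pairwise-averaged cross-covariance equals the cross-covariance of the two population-summed spike counts divided by the number of pairs. A minimal NumPy sketch under that assumption; function and variable names are illustrative and not taken from the paper's code.

import numpy as np

def avg_pairwise_cross_covariance(counts_a, counts_b, bin_size=1e-4):
    """Pairwise-averaged cross-covariance between two disjoint neuron groups.

    counts_a, counts_b : (n_neurons, n_bins) spike-count matrices (0.1 ms bins).
    Returns the lags (in s) and the average spike-count cross-covariance per pair.

    Covariances are additive, so averaging over all cross-group pairs equals the
    cross-covariance of the two population sums divided by the number of pairs
    (no autocovariance terms enter because the groups share no neurons).
    """
    n_a, n_bins = counts_a.shape
    n_b = counts_b.shape[0]

    pop_a = counts_a.sum(axis=0).astype(float)
    pop_b = counts_b.sum(axis=0).astype(float)
    pop_a -= pop_a.mean()
    pop_b -= pop_b.mean()

    # Circular cross-covariance via FFT: raw[k] ~ sum_t pop_a[t + k] * pop_b[t]
    spec = np.fft.rfft(pop_a) * np.conj(np.fft.rfft(pop_b))
    raw = np.fft.irfft(spec, n=n_bins) / n_bins   # biased (1/T) estimator
    raw = np.fft.fftshift(raw)                    # put zero lag in the middle

    lags = (np.arange(n_bins) - n_bins // 2) * bin_size
    return lags, raw / (n_a * n_b)

To mirror the figure, one would restrict the returned lags to Δ ∈ (−30, 30) ms; the autocovariances in the insets would instead be computed neuron by neuron and averaged over the 100 sampled neurons per population.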

Mentions: Fig 5 demonstrates the robustness of J ∝ 1/K scaling to the firing rate of the network. In this example, both the full-scale network and the downscaled networks receive a balanced Poisson drive producing the desired variance, while the mean input is provided by a DC drive. By changing the parameters of the external drive, we create two networks, each with irregular spiking but with widely different mean rates (3.3 spikes/s and 29.6 spikes/s). When only the number of synapses is downscaled but not the number of neurons, both the temporal structure and the size of the correlations are closely preserved. When the in-degrees and the number of neurons N are reduced by the same factor, the correlations are scaled by 1/N. Hence, the correlations of the full-scale network of size N0 can be estimated simply by multiplying those of the reduced network by N/N0. In contrast, J ∝ 1/√K changes correlation sizes even when N is held constant, and combined scaling of N and K can therefore not simply be compensated for by the factor N/N0. In the high-rate network, the spiking statistics of the neurons are non-Poissonian, as seen from the gap in the autocorrelations (insets in Fig 5B, 5D). Nevertheless, J ∝ 1/K preserves the correlations more closely than J ∝ 1/√K, showing that the predicted scaling properties hold beyond the strict domain of validity of the underlying theory.
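The compensation used in this example can be sketched with a common mean-field convention for LIF input statistics (mean μ = τ_m K J ν and variance σ² = τ_m K J² ν per presynaptic population). The code below is an illustration under that assumption, with made-up parameter values rather than those of Table 2, and it makes the downscaling limit from the abstract explicit: the external variance left after subtracting the recurrent contribution must stay non-negative.

def downscale_in_degrees(K, K0, J0, nu, tau_m, sigma2_target, J_ext):
    """J ∝ 1/K downscaling with external-drive compensation (sketch).

    Assumes the diffusion-approximation convention mu = tau_m*K*J*nu and
    sigma^2 = tau_m*K*J^2*nu for one presynaptic population. The balanced
    Poisson drive (equal rates nu_bal, weights +/- J_ext) contributes
    variance 2*tau_m*nu_bal*J_ext**2 and no mean, so the DC drive that sets
    the mean input is unchanged under this scaling.
    """
    J = J0 * K0 / K                        # keeps the recurrent mean K*J*nu fixed
    sigma2_rec = tau_m * K * J ** 2 * nu   # grows as 1/K when K is reduced
    sigma2_ext = sigma2_target - sigma2_rec
    if sigma2_ext < 0:
        raise ValueError("downscaling limit: required external variance < 0")
    nu_bal = sigma2_ext / (2.0 * tau_m * J_ext ** 2)
    return J, nu_bal


# Illustrative full-scale values (not Table 2): K0 = 1000, J0 = 0.1 mV,
# tau_m = 10 ms, internal rate 8 spikes/s, external weight 0.1 mV,
# full-scale balanced Poisson rate 5000 spikes/s for each sign.
tau_m, nu, J_ext, K0, J0 = 10e-3, 8.0, 0.1, 1000, 0.1
sigma2_full = tau_m * K0 * J0 ** 2 * nu + 2.0 * tau_m * 5000.0 * J_ext ** 2

J_half, nu_bal_half = downscale_in_degrees(500, K0, J0, nu, tau_m, sigma2_full, J_ext)
print(J_half, nu_bal_half)   # J doubled to 0.2 mV; balanced rate drops to 1000 spikes/s
# Pushing further (e.g. K = 250) raises the ValueError: the range of in-degrees
# over which the working point can be preserved is bounded by the external variance.

If N is reduced together with K, the covariances measured in the reduced network would then be multiplied by N/N0 to estimate the full-scale values, as described above.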

