A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations.

Hahne J, Helias M, Kunkel S, Igarashi J, Bolten M, Frommer A, Diesmann M - Front Neuroinform (2015)

Bottom Line: This approach is well-suited for simulations that employ only chemical synapses, but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.


Affiliation: Department of Mathematics and Science, Bergische Universität Wuppertal, Wuppertal, Germany.

ABSTRACT
Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy in the presence of gap junctions, we present benchmarks for workstations, clusters, and supercomputers. Finally, we discuss limitations of the novel technology.





Figure 11: Overhead of gap-junction framework for network with only chemical synapses. VP denotes the overall number of processes used in line with our distribution strategy described in Section 2.4. In this and all subsequent figures shades of blue indicate the JUQUEEN supercomputer. (A) Triangles show the maximum network size that can be simulated in the absence of gap junctions (Test case 3). Circles show the corresponding wall-clock time required to simulate the network for 1 s of biological time. Dark blue symbols indicate the results with the 4th generation simulation kernel of NEST without the gap-junction framework and blue curves and symbols are obtained with the framework included. (B) Increase of time (blue circles) and memory consumption (blue triangles) due to the gap-junction framework in percent compared to the 4th generation simulation kernel.

Mentions: We employ the balanced random network model (Brunel, 2000) to investigate the former issue (Test case 3) and measure the deviation in simulation time and memory usage due to the inclusion of the framework. Figure 11 shows the network in a maximum-filling scenario, where for a given machine size VP we simulate the largest possible network that completely fills the memory of the machine. Although the simulation scenario is maximum filling, we were able to simulate the same network size as before, as the increase in memory usage is within the safety margin of our maximum-filling procedure (see Kunkel et al., 2014 for details on the procedure). Measured as a percentage of the prior memory usage (Figure 11B), the memory consumption increases by 0.6–2.7 percent; the run time of the simulation increases by 0.5–3.8 percent. The small increase in memory usage is caused by the changes to the thread-local connection infrastructure and the communication buffer described in Sections 2.2.1 and 2.2.2. In the case of primary events only (no use of gap junctions), the only extra data member is primary_end, which affects only the connection container called HetConnector. As a HetConnector is only instantiated if there are two or more synapse types targeting neurons on a given machine and having the same source neuron, this additional data member is irrelevant in the limit of large machines (sparse limit), where practically all connections are stored in HomConnectors; the latter containers hold only connections of identical type and do not have the additional data member primary_end. The small increase in run time is due to an additional check for the existence of secondary connections, which has to be performed during the delivery of events. The check is done directly after retrieving the pointer address from the sparse table and does not require additional memory, as this information is encoded in redundant bits of the pointer address itself (see Section 2.2.1 for details).
The smaller relative increase in run time at higher numbers of virtual processes VP is due to the longer absolute simulation time: part of the overhead is incurred during initialization at the beginning of the simulation and is therefore amortized over a longer run.

