Criticality Maximizes Complexity in Neural Tissue


ABSTRACT

The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity—an information theoretic measure—has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure represents an important tool in complex systems research because it allows for the quantification of the complexity of a neural system. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online.
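As a concrete illustration of the doubly truncated power-law fitting mentioned above, the sketch below estimates the exponent tau of a discrete power law p(s) ~ s^(-tau) restricted to s_min <= s <= s_max by maximum likelihood. This is a minimal sketch under those assumptions, not code from the Neural Complexity and Criticality toolbox; the function name and the fixed truncation bounds are illustrative, and the toolbox's fit search presumably also scans candidate truncation points, which is omitted here.

```python
# Minimal MLE sketch for a doubly truncated discrete power law (illustrative,
# not the Neural Complexity and Criticality toolbox implementation).
import numpy as np
from scipy.optimize import minimize_scalar

def fit_truncated_power_law(sizes, s_min, s_max):
    """Return the MLE exponent tau for p(s) ~ s^(-tau) on s_min <= s <= s_max."""
    s = np.asarray(sizes, dtype=float)
    s = s[(s >= s_min) & (s <= s_max)]                  # keep avalanches inside the truncated range
    support = np.arange(s_min, s_max + 1, dtype=float)  # discrete support of the truncated model

    def neg_log_likelihood(tau):
        # p(s) = s^(-tau) / Z(tau), with Z(tau) summed over the truncated support
        log_z = np.log(np.sum(support ** (-tau)))
        return tau * np.sum(np.log(s)) + s.size * log_z

    result = minimize_scalar(neg_log_likelihood, bounds=(1.0, 5.0), method="bounded")
    return result.x
```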



Figure 4: Sub-sampling algorithm. (A–C) Power-law fits for avalanche size distributions for a full example cortical branching model (A), a sub-sampled version of the same model with 68 neurons (B), and a sub-sampled version of the same model with 40 neurons (C). (D) Scatter plot of power-law fit exponents vs. inverse sub-sample size. A weighted linear least squares fit produces an extrapolation of the value for infinite system size at the y-intercept. Highlighted data points correspond to fits from (A–C) via color coding.

Mentions: The specific sub-sampling routine we utilized is as follows (Figure 4): We randomly sub-sampled each system 30 times at evenly spaced system sizes ranging from 40% of the recorded system size to the full recorded system size. We then calculated the relevant values of interest (e.g., size distribution power-law fits) for each sub-sampled system. These analyses were identical to those performed on the whole recorded system as described above, with the exception of the power-law MLE fits. Because the power-law fit search algorithm was unstable under sub-sampling, for the sub-sampled power-law MLE fits we used the fit value for all avalanches that survived the minimum size/duration and minimum occurrence cuts. Next, we plotted the values vs. the inverse neuron number for each sub-sample and fit these data with a weighted linear least-squares fit, using the number of neurons in each sub-sample as the weight. This fitting procedure was applied to all sub-samples in all analyses, with the exception of the complexity data from the cortical branching model, which exhibited a discontinuity in the complexity trend and therefore required a different fitting method. For those data, we fit the 10 largest sub-samples as described above, then added progressively smaller sub-samples and refit until the newest point produced a fit residual larger than the mean residual plus 3 standard deviations of the residuals. Following the fitting, in all cases the y-intercept (i.e., 1/N = 0) was interpreted as an estimate of the quantity of interest in an infinite or very large system.
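The extrapolation step described above can be summarized in a short sketch. The snippet below assumes each sub-sample yields a single scalar estimate (e.g., a power-law exponent); the function and variable names (subsample_sizes, estimates) are illustrative rather than taken from the toolbox, and the residual-based variant is one reasonable reading of the cutoff procedure used for the branching-model complexity data.

```python
# Illustrative sketch of the 1/N extrapolation, not the toolbox implementation.
import numpy as np

def extrapolate_to_infinite_size(subsample_sizes, estimates):
    """Weighted linear fit of estimates vs. 1/N; the y-intercept is the large-system estimate."""
    x = 1.0 / np.asarray(subsample_sizes, dtype=float)  # inverse neuron number
    y = np.asarray(estimates, dtype=float)
    w = np.asarray(subsample_sizes, dtype=float)        # weight = number of neurons in the sub-sample
    slope, intercept = np.polyfit(x, y, deg=1, w=w)
    return intercept                                    # value at 1/N = 0

def extrapolate_with_outlier_cutoff(subsample_sizes, estimates, n_start=10, n_sigma=3.0):
    """Variant for data with a discontinuity: start from the n_start largest sub-samples
    and add smaller ones until a new point's residual exceeds the mean residual plus
    n_sigma standard deviations of the residuals."""
    order = np.argsort(subsample_sizes)[::-1]           # largest sub-samples first
    sizes = np.asarray(subsample_sizes, dtype=float)[order]
    vals = np.asarray(estimates, dtype=float)[order]
    keep = n_start
    while keep < len(sizes):
        x, y, w = 1.0 / sizes[:keep], vals[:keep], sizes[:keep]
        slope, intercept = np.polyfit(x, y, deg=1, w=w)
        residuals = np.abs(y - (slope * x + intercept))
        new_residual = abs(vals[keep] - (slope / sizes[keep] + intercept))
        if new_residual > residuals.mean() + n_sigma * residuals.std():
            break                                       # the trend breaks here; stop adding points
        keep += 1
    x, y, w = 1.0 / sizes[:keep], vals[:keep], sizes[:keep]
    return np.polyfit(x, y, deg=1, w=w)[1]              # y-intercept at 1/N = 0
```

Weighting by the number of neurons gives the larger, presumably more reliable sub-samples greater influence on the fitted line, so the y-intercept (1/N = 0) can be read as the estimate for an infinite or very large system.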

