A compressed sensing perspective of hippocampal function.
Bottom Line:
Input from the cortex passes through convergent axon pathways to the downstream hippocampal subregions and, after being appropriately processed, is fanned out back to the cortex. In this work, hippocampus-related regions and their respective circuitry are presented as a CS-based system whose different components collaborate to realize efficient memory encoding and decoding processes. This proposition introduces a unifying mathematical framework for hippocampal function and opens new avenues for exploring coding and decoding strategies in the brain.
View Article:
PubMed Central - PubMed
Affiliation: Computational Biology Lab, Institute of Molecular Biology and Biotechnology, Foundation for Research and Technology-Hellas, Heraklion, Greece.
ABSTRACT
The hippocampus is one of the most important information-processing units in the brain. Input from the cortex passes through convergent axon pathways to the downstream hippocampal subregions and, after being appropriately processed, is fanned out back to the cortex. Here, we review evidence for the hypothesis that information flow and processing in the hippocampus complies with the principles of Compressed Sensing (CS). CS theory comprises a mathematical framework that describes how, and under which conditions, restricted sampling of information (a data set) can lead to condensed, yet faithful, forms of the original, subsampled information entity (i.e., of the original data set). In this work, hippocampus-related regions and their respective circuitry are presented as a CS-based system whose different components collaborate to realize efficient memory encoding and decoding processes. This proposition introduces a unifying mathematical framework for hippocampal function and opens new avenues for exploring coding and decoding strategies in the brain.
Mentions: CS theory originates from the field of high-dimensional statistics. Recent advances in this field have led to a powerful, yet extremely simple, methodology for dealing with the curse of dimensionality, termed Random Projections (RP). This entails randomly projecting data patterns from high-dimensional spaces to lower-dimensional ones (Baraniuk, 2011), which reduces the dimensionality while retaining the valuable content of the original data, allowing for efficient processing in the lower-dimensional space. What CS theory adds to this framework is that, once the high-dimensional data are represented by sparse components of a suitable basis set, they can be reconstructed from their RPs alone. Thus, low-dimensional RPs are not only suitable for interpreting the original, high-dimensional data patterns but also comprise an efficient encoding that can be used as a compressed representation of the original data; the high-dimensional patterns can then be recovered by appropriate decoding processes. Figure 1A depicts 3D objects and their 2D shadows, which can be likened to the high-dimensional data and the lower-dimensional RPs, respectively. CS theory implies that it is possible to infer the form of the 3D structure using only a limited set of 2D shadows (random projections) of the wire frames.
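The encode-by-random-projection, decode-by-sparse-recovery cycle described above can be sketched numerically. The snippet below is an illustration only (it is not the model from the paper): a sparse high-dimensional signal is compressed by a random Gaussian projection, then recovered with Orthogonal Matching Pursuit, one standard CS decoding algorithm. All dimensions, seeds, and the `omp` helper are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 30, 3          # ambient dim, number of projections, sparsity

# A k-sparse "memory pattern" in the high-dimensional space.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.uniform(1.0, 2.0, size=k) * rng.choice([-1.0, 1.0], size=k)

# Random projection: m << n linear measurements (the "encoding" step).
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
y = A @ x                      # compressed, low-dimensional representation

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse signal."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        corr = np.abs(A.T @ residual)      # correlate atoms with residual
        if idx:
            corr[idx] = 0.0                # don't re-select chosen atoms
        idx.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef    # project out the chosen atoms
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
# With m = 30 random measurements, the 3-sparse pattern in R^100 is
# typically recovered exactly (up to numerical precision).
print("reconstruction error:", np.linalg.norm(x_hat - x))
```

The design point mirrors the text: the projection step alone suffices for distance-preserving processing in the low-dimensional space, but sparsity is what makes faithful reconstruction of the original pattern possible from far fewer measurements than dimensions.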