Network architecture underlying maximal separation of neuronal representations.
Bottom Line:
Inspired by experimental findings on network architecture in the olfactory system of the locust, I construct a highly simplified theoretical framework which allows for analytic solution of its key properties. For generalized feed-forward systems, I show that an intermediate range of connectivity values between source- and target-populations leads to a combinatorial explosion of wiring possibilities, resulting in input spaces which are, by their very nature, exquisitely sparsely populated. I suggest a straightforward way to construct ecologically meaningful representations from this code.
View Article:
PubMed Central - PubMed
Affiliation: Interdisciplinary Center for Neural Computation, Hebrew University Jerusalem, Israel.
ABSTRACT
One of the most basic and general tasks faced by all nervous systems is extracting relevant information from the organism's surrounding world. While physical signals available to sensory systems are often continuous, variable, overlapping, and noisy, high-level neuronal representations used for decision-making tend to be discrete, specific, invariant, and highly separable. This study addresses the question of how neuronal specificity is generated. Inspired by experimental findings on network architecture in the olfactory system of the locust, I construct a highly simplified theoretical framework which allows for analytic solution of its key properties. For generalized feed-forward systems, I show that an intermediate range of connectivity values between source- and target-populations leads to a combinatorial explosion of wiring possibilities, resulting in input spaces which are, by their very nature, exquisitely sparsely populated. In particular, connection probability ½, as found in the locust antennal-lobe-mushroom-body circuit, serves to maximize separation of neuronal representations across the target Kenyon cells (KCs), and explains their specific and reliable responses. This analysis yields a function expressing response specificity in terms of lower network parameters; together with appropriate gain control this leads to a simple neuronal algorithm for generating arbitrarily sparse and selective codes and linking network architecture and neural coding. I suggest a straightforward way to construct ecologically meaningful representations from this code.
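The claim that connection probability ½ maximizes separation can be illustrated analytically: two independent Bernoulli(c) connectivity vectors of length N differ at each coordinate with probability 2c(1−c), so their expected Hamming distance is 2Nc(1−c), which peaks at c = ½. A minimal sketch (the PN count N here is an illustrative value, not taken from the paper):

```python
import numpy as np

N = 800  # number of projection neurons (illustrative value, not from the paper)
c_values = np.linspace(0.05, 0.95, 19)  # candidate connection probabilities

# Expected Hamming distance between two independent Bernoulli(c)
# connectivity vectors of length N: each coordinate differs with
# probability 2*c*(1-c), so E[distance] = 2*N*c*(1-c).
expected_dist = 2 * N * c_values * (1 - c_values)

best_c = c_values[np.argmax(expected_dist)]
print(best_c)           # 0.5
print(expected_dist.max())  # N/2 = 400.0
```

The quadratic 2Nc(1−c) vanishes at c = 0 and c = 1 (identical all-zero or all-one vectors) and is maximal at c = ½, matching the connectivity reported for the locust antennal-lobe-to-mushroom-body projection.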
Mentions: Connectivity ½ thus maximizes differences between PN–KC connectivity vectors. I demonstrate this graphically in Figure 3 using elementary Venn diagrams. Two different KCs, each of which samples PNs randomly and independently with probability c, thus define two sets of PNs (I call these sets u and v). Each large (open) circle in Figure 3A represents the entire PN set (with area N), the two smaller circles within it mark the PN subsets u and v sampled by our two KCs (with average area N · c each; the value of c is indicated above each diagram).
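The Venn-diagram construction can be checked by Monte Carlo: draw the two PN subsets u and v independently with sampling probability c and measure their symmetric difference, whose expectation is 2Nc(1−c). A sketch under assumed illustrative values of N and the trial count:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000   # PN population size (illustrative, not the paper's value)
c = 0.5    # per-PN sampling probability for each KC

trials = 2000
sym_diff = np.empty(trials)
for t in range(trials):
    u = rng.random(N) < c  # PN subset sampled by the first KC
    v = rng.random(N) < c  # PN subset sampled by the second KC
    # |u Δ v|: PNs sampled by exactly one of the two KCs
    sym_diff[t] = np.logical_xor(u, v).sum()

# Analytic expectation: E[|u Δ v|] = 2*N*c*(1-c) = 500 at c = 0.5
print(sym_diff.mean())
```

Repeating the experiment with other values of c confirms that the average symmetric difference, i.e. the distance between the two KCs' connectivity vectors, is largest at c = ½.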