Network architecture underlying maximal separation of neuronal representations.

Jortner RA - Front Neuroeng (2013)

Bottom Line: Inspired by experimental findings on network architecture in the olfactory system of the locust, I construct a highly simplified theoretical framework which allows for analytic solution of its key properties. For generalized feed-forward systems, I show that an intermediate range of connectivity values between source- and target-populations leads to a combinatorial explosion of wiring possibilities, resulting in input spaces which are, by their very nature, exquisitely sparsely populated. I suggest a straightforward way to construct ecologically meaningful representations from this code.


Affiliation: Interdisciplinary Center for Neural Computation, Hebrew University of Jerusalem, Israel.

ABSTRACT
One of the most basic and general tasks faced by all nervous systems is extracting relevant information from the organism's surrounding world. While physical signals available to sensory systems are often continuous, variable, overlapping, and noisy, high-level neuronal representations used for decision-making tend to be discrete, specific, invariant, and highly separable. This study addresses the question of how neuronal specificity is generated. Inspired by experimental findings on network architecture in the olfactory system of the locust, I construct a highly simplified theoretical framework which allows for analytic solution of its key properties. For generalized feed-forward systems, I show that an intermediate range of connectivity values between source- and target-populations leads to a combinatorial explosion of wiring possibilities, resulting in input spaces which are, by their very nature, exquisitely sparsely populated. In particular, connection probability ½, as found in the locust antennal-lobe-mushroom-body circuit, serves to maximize separation of neuronal representations across the target Kenyon cells (KCs), and explains their specific and reliable responses. This analysis yields a function expressing response specificity in terms of lower network parameters; together with appropriate gain control this leads to a simple neuronal algorithm for generating arbitrarily sparse and selective codes and linking network architecture and neural coding. I suggest a straightforward way to construct ecologically meaningful representations from this code.
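The "combinatorial explosion of wiring possibilities" and the special role of connection probability ½ can be illustrated numerically: a single target neuron that samples a fraction c of N sources has C(N, cN) possible wiring patterns, a binomial coefficient that peaks at c = ½. A minimal sketch (the value N = 800 matches the locust PN count; the other c values are illustrative):

```python
from math import comb, log10

N = 800  # number of source neurons (PNs in the locust circuit)

# Number of distinct wiring patterns for one target neuron connected to
# k = c*N of the N sources is C(N, k); it is maximized at c = 1/2.
for c in (0.05, 0.25, 0.5, 0.75, 0.95):
    k = round(c * N)
    print(f"c = {c:4.2f}: log10(#wiring patterns) = {log10(comb(N, k)):.1f}")
```

At c = ½ the count is astronomically larger than at sparse or dense connectivity, which is why randomly wired target neurons at this connectivity are so unlikely to share inputs.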




Figure 1: Framework for studying the separation of neuronal representations. (A) Circuit diagram of the locust olfactory system. Odor information reaches the antennal lobe via ~90,000 olfactory receptor-neurons in the antenna. In the antennal lobe, ~800 projection neurons (PNs, yellow) project it further to the mushroom body (onto ~50,000 Kenyon cells (KCs), blue; PN–KC connection probability ½) and the lateral horn (not shown). In the transition from PNs to KCs the odor code changes dramatically, from broad and highly distributed (in PNs) to sparse and specific (in KCs). KC axons split into the α- and β-lobes, where they synapse onto α- and β-lobe extrinsic neurons (green), respectively (KC–β-lobe-neuron connection probability ~0.02). Red arrows indicate direction of information flow. See text for more details. (B) Mathematical framework for studying the transformation in coding. The model represents the state of a theoretical network inspired by PN–KC circuitry during a brief snapshot in time. Color code and information flow same as in (A). A set of N source-neurons (activity denoted by the binary-valued vector x, entries i.i.d. with probability p) projects onto a set of M target neurons (activity denoted by the vector y) via a set of feed-forward connections (binary-valued connectivity matrix C, entries i.i.d. with probability c). The aggregate input to the target layer is the vector u = Cx, the product of the source-neuron activity vector x and the connectivity matrix C. y is obtained by thresholding u using the Heaviside function Θ.

Mentions: One example where detailed knowledge exists on network parameters and coding schemes is the olfactory system of the locust (Schistocerca americana) (Figure 1A). In this relatively simple system, 800 broadly tuned and noisy second-order neurons (projection neurons, PNs) project directly onto 50,000 third-order neurons (Kenyon cells, KCs), which are highly selective and reliable in their odor responses (Perez-Orive et al., 2002). As the system is feed-forward, small, well-defined, and displays a dramatic change in coding—from distributed to sparse—between source- and target-populations, it seems well suited for studying the origins of neuronal specificity.
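The model of Figure 1B can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's analysis: population sizes are scaled down from the locust's ~800 PNs and ~50,000 KCs, the source-activity probability p is illustrative, and the threshold is set so that ~1% of targets fire, standing in for the gain control discussed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (M scaled down from the locust's ~50,000 KCs)
N, M = 800, 5000   # source (PN-like) and target (KC-like) population sizes
p = 0.15           # probability that a source neuron is active in the snapshot
c = 0.5            # source->target connection probability, as in the PN-KC circuit

x = (rng.random(N) < p).astype(int)       # binary source-activity vector
C = (rng.random((M, N)) < c).astype(int)  # binary connectivity matrix, i.i.d. entries

u = C @ x                                 # aggregate input u = Cx to each target
theta = np.quantile(u, 0.99)              # threshold so that ~1% of targets fire
y = (u > theta).astype(int)               # Heaviside thresholding -> sparse target code

print(f"fraction active: sources {x.mean():.3f}, targets {y.mean():.3f}")
```

The broad, distributed source code (here ~15% of neurons active) becomes a sparse, selective target code (~1% active), mirroring the PN-to-KC transformation described above.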

