Measuring information-transfer delays.

Wibral M, Pampu N, Priesemann V, Siebenhühner F, Seiwert H, Lindner M, Lizier JT, Vicente R - PLoS ONE (2013)

Bottom Line: In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.


Affiliation: MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany. wibral@em.uni-frankfurt.de

ABSTRACT
In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener's principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
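The extension described in the abstract can be written compactly. The formula below is reconstructed from standard transfer-entropy definitions rather than quoted from the article: the next target sample is conditioned on the immediately preceding embedded target state, while the embedded source state is shifted by a candidate delay u, and the interaction delay is recovered as the maximizing u:

```latex
TE_{SPO}(X \to Y, t, u) \;=\; I\!\left(y_t \,;\, \mathbf{x}_{t-u} \,\middle|\, \mathbf{y}_{t-1}\right),
\qquad
\hat{\delta} \;=\; \arg\max_{u} \; TE_{SPO}(X \to Y, t, u)
```

Here bold symbols denote delay-embedded states of the source and target processes; retaining the conditioning on the target state at the immediately previous time step is what makes the maximum identify the true delay.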



pone-0055809-g001: Illustration of the main ideas behind interaction delay reconstruction using the TESPO estimator. (A) Scalar time courses of processes X and Y, coupled X → Y with delay δ, as indicated by the blue arrow. Colored boxes with circles indicate data belonging to a certain state of the respective process. The star on the Y time series indicates the scalar observation to be predicted in Wiener's sense. Three settings for the delay parameter u are depicted: (1) u < δ – u is chosen such that influences of the source state on Y arrive in the future of the prediction point. Hence, the information in this state is useless and yields no transfer entropy. (2) u = δ – u is chosen such that influences of the source state arrive exactly at the prediction point, and influence it. Information about this state is useful, and we obtain nonzero transfer entropy. (3) u > δ – u is chosen such that influences of the source state arrive in the far past of the prediction point. This information is already available in the past states of the target that we condition upon in TESPO. Information about this state is useless again, and we obtain zero transfer entropy. (B) Depiction of the same idea in a more detailed view, showing states (gray boxes) of the processes, the samples of the most informative state (black circles), and noninformative states (white circles). The curve in the left column indicates the approximate dependency of TESPO versus u. The red circles indicate the values obtained with the respective states on the right.




Mentions: The main ideas behind delay reconstruction via maximizing TESPO are illustrated in Figure 1. By scanning the delay parameter u we shift the considered state of the source process in time. If this state is in the relative future of the observation to be predicted, i.e. u < δ, its influence has not arrived at the prediction point yet. As a consequence, the state is uninformative and we get low TESPO. If the state has a time delay u = δ, such that the influence arrives exactly at the prediction point, then TESPO is maximal. If the state has too long a delay, u > δ, then its influence has arrived before the prediction point and is already taken into account via conditioning on the past state of the target; again we obtain low TESPO. In the following we will present our proof. Since it is of a technical nature, the reader may safely skip ahead if not interested in this material.
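The delay scan described above can be sketched numerically. This is a minimal illustration, not the authors' implementation: it uses a crude binary (median-split) plug-in transfer-entropy estimator with embedding dimension 1 on a toy pair of unidirectionally coupled processes; the function `binned_te` and the coupling parameters are invented for the example.

```python
# Sketch: recover the interaction delay delta by scanning the source lag u
# and maximizing a plug-in estimate of TE(X -> Y, u) = I(y_t ; x_{t-u} | y_{t-1}).
import math
import random
from collections import Counter

def binned_te(x, y, u):
    """Plug-in transfer entropy with source lag u, embedding dimension 1,
    after median-split binarisation of both series."""
    def sym(s):
        m = sorted(s)[len(s) // 2]          # median threshold
        return [1 if v > m else 0 for v in s]
    xs, ys = sym(x), sym(y)
    # joint symbols (y_t, y_{t-1}, x_{t-u})
    triples = [(ys[t], ys[t - 1], xs[t - u]) for t in range(max(u, 1), len(y))]
    n = len(triples)
    c_abc = Counter(triples)                           # (y_t, y_{t-1}, x_{t-u})
    c_ab = Counter((a, b) for a, b, _ in triples)      # (y_t, y_{t-1})
    c_bc = Counter((b, c) for _, b, c in triples)      # (y_{t-1}, x_{t-u})
    c_b = Counter(b for _, b, _ in triples)            # y_{t-1}
    # TE = sum p(a,b,c) * log2[ p(a|b,c) / p(a|b) ]
    return sum(k / n * math.log2(k * c_b[b] / (c_ab[(a, b)] * c_bc[(b, c)]))
               for (a, b, c), k in c_abc.items())

random.seed(0)
delta, n = 5, 20000
x = [random.gauss(0, 1) for _ in range(n)]             # source: white noise
y = [0.0] * n
for t in range(delta, n):                              # target driven at lag delta
    y[t] = 0.5 * x[t - delta] + 0.1 * random.gauss(0, 1)

scan = {u: binned_te(x, y, u) for u in range(1, 11)}   # scan candidate delays
u_hat = max(scan, key=scan.get)
print(u_hat)  # the scan peaks at the true delay, u_hat == 5
```

Away from u = δ the estimate stays near its small-sample bias, because the shifted source state is either not yet influential (u < δ) or redundant given the conditioning on the target's immediate past (u > δ), which is exactly the three-regime picture of Figure 1.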

