A distributed reasoning engine ecosystem for semantic context-management in smart environments.

Almeida A, López-de-Ipiña D - Sensors (Basel) (2012)

Bottom Line: Ontologies have proven themselves to be one of the best tools for this task. In order to tackle this problem we have developed a mechanism that splits the context reasoning problem into smaller parts in order to reduce the inference time. Finally, we compare the distributed reasoning with the centralized one, analyzing in which situations each approach is more suitable.

View Article: PubMed Central - PubMed

Affiliation: Deusto Institute of Technology (DeustoTech), University of Deusto, Bilbao 48007, Spain. aitor.almeida@deusto.es

ABSTRACT
To react adequately, a smart environment must be aware of the context and its changes. Modeling the context allows applications to better understand it and to adapt to its changes. To do this, an appropriate formal representation method is needed, and ontologies have proven themselves to be one of the best tools for the task. Semantic inference provides a powerful framework to reason over the context data, but this approach has some problems: inference over semantic context information can be cumbersome when working with a large amount of data. This situation has become more common in modern smart environments, where a lot of sensors and devices are available. To tackle this problem we have developed a mechanism that distributes the context reasoning problem into smaller parts in order to reduce the inference time. In this paper we describe a distributed peer-to-peer agent architecture of context consumers and context providers. We explain how this inference sharing process works, partitioning the context information according to the interests of the agents, their location and a certainty factor. We also discuss the system architecture, analyzing the negotiation process between the agents. Finally, we compare the distributed reasoning with the centralized one, analyzing in which situations each approach is more suitable.
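The paper does not publish code, so the following Python sketch is only an illustration of the partitioning idea described above: context facts are assigned to provider agents according to their declared interests and location, after discarding facts below a certainty-factor threshold, so that each agent reasons over a smaller subset. All names here (ContextFact, partition_context, the example agents and the 0.5 threshold) are hypothetical.

```python
# Hypothetical sketch of interest/location/certainty-based context partitioning.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ContextFact:
    subject: str
    location: str
    topic: str
    certainty: float  # certainty factor in [0, 1]

def partition_context(facts, agents, min_certainty=0.5):
    """Assign each sufficiently certain fact to every agent whose declared
    interests (topics) and location match, so inference can run in parallel."""
    partitions = defaultdict(list)
    for fact in facts:
        if fact.certainty < min_certainty:
            continue  # drop low-certainty facts before reasoning
        for agent_id, (topics, location) in agents.items():
            if fact.topic in topics and fact.location == location:
                partitions[agent_id].append(fact)
    return partitions

facts = [
    ContextFact("lamp1", "kitchen", "lighting", 0.9),
    ContextFact("sensor3", "kitchen", "temperature", 0.4),  # filtered out
    ContextFact("door2", "hall", "security", 0.8),
]
agents = {
    "kitchen-agent": ({"lighting", "temperature"}, "kitchen"),
    "hall-agent": ({"security"}, "hall"),
}
parts = partition_context(facts, agents)
print({a: [f.subject for f in fs] for a, fs in parts.items()})
```

Each resulting partition could then be handed to its agent's reasoner independently, which is the source of the parallel speedup discussed below.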

f10-sensors-12-10208: Comparison of inference times (in milliseconds) between the centralized and distributed approaches.

Mentions: Figure 10 shows the differences between the centralized (Scenario A) and distributed (Scenarios B, C and D) approaches. For a small number of context providers the centralized approach is much more efficient: the time gained by parallelizing the inference process is minimal, and much more time is lost to network latency. But as the number of Context Providers increases, the distributed approach becomes more efficient. Even with 250 Context Providers the inference time for the distributed approach is under 2 s, while the inference time of the centralized approach is 6,386 ms. This difference is even bigger with 300 Context Providers, where the centralized approach is five times slower.
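The crossover behavior can be sketched with a toy cost model: assume centralized inference cost grows with the square of the number of context providers, while the distributed cost is a fixed network-latency overhead plus the cost of reasoning over one partition. None of the coefficients below come from the paper (only the measurements quoted above are real); they are hypothetical values chosen solely to reproduce the qualitative crossover.

```python
# Toy model of the centralized-vs-distributed crossover; all coefficients
# (unit_cost, partition_size, latency) are hypothetical, not measured.
def centralized_ms(n_providers, unit_cost=0.07):
    # All facts reasoned over together: cost grows quadratically.
    return unit_cost * n_providers ** 2

def distributed_ms(n_providers, partition_size=25, unit_cost=0.07, latency=300):
    # Partitions run in parallel: fixed latency plus one partition's cost.
    return latency + unit_cost * partition_size ** 2

for n in (50, 100, 250, 300):
    c, d = centralized_ms(n), distributed_ms(n)
    print(f"{n:3d} providers: centralized {c:7.0f} ms, distributed {d:5.0f} ms")
```

Under these assumed parameters the centralized approach wins for small provider counts (the latency term dominates) and loses badly for large ones, matching the qualitative pattern reported for Figure 10.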

