A distributed reasoning engine ecosystem for semantic context-management in smart environments.

Almeida A, López-de-Ipiña D - Sensors (Basel) (2012)

Bottom Line: Ontologies have proven themselves to be one of the best tools to do so. In order to tackle this problem we have developed a mechanism to distribute the context reasoning problem into smaller parts in order to reduce the inference time. Finally, we compare the distributed reasoning with the centralized one, analyzing in which situations each approach is more suitable.

Affiliation: Deusto Institute of Technology (DeustoTech), University of Deusto, Bilbao 48007, Spain. aitor.almeida@deusto.es

ABSTRACT
To be able to react adequately, a smart environment must be aware of its context and its changes. Modeling the context allows applications to better understand it and to adapt to its changes. In order to do this, an appropriate formal representation method is needed. Ontologies have proven themselves to be one of the best tools to do so. Semantic inference provides a powerful framework to reason over the context data. But there are some problems with this approach. The inference over semantic context information can be cumbersome when working with a large amount of data. This situation has become more common in modern smart environments, where a lot of sensors and devices are available. In order to tackle this problem, we have developed a mechanism to distribute the context reasoning problem into smaller parts in order to reduce the inference time. In this paper we describe a distributed peer-to-peer agent architecture of context consumers and context providers. We explain how this inference sharing process works, partitioning the context information according to the agents' interests, location and a certainty factor. We also discuss the system architecture, analyzing the negotiation process between the agents. Finally, we compare the distributed reasoning with the centralized one, analyzing in which situations each approach is more suitable.
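
As an illustrative aside, the partitioning criteria mentioned in the abstract (the agents' interests, location, and a certainty factor) could be sketched as a simple routing rule. The class names, fields, and threshold in the following Python sketch are hypothetical assumptions made for exposition, not the authors' actual implementation:

    from dataclasses import dataclass

    @dataclass
    class ContextFact:
        topic: str        # e.g., "temperature"
        location: str     # e.g., "kitchen"
        certainty: float  # certainty factor in [0, 1]

    @dataclass
    class ContextConsumer:
        name: str
        interests: set        # topics this agent reasons about
        locations: set        # locations this agent covers
        min_certainty: float  # facts below this threshold are ignored

        def accepts(self, fact: ContextFact) -> bool:
            # A fact is routed to a consumer only if it matches the
            # consumer's interests and locations with enough certainty.
            return (fact.topic in self.interests
                    and fact.location in self.locations
                    and fact.certainty >= self.min_certainty)

    def partition(facts, consumers):
        # Assign each fact to every consumer that accepts it.
        return {c.name: [f for f in facts if c.accepts(f)] for c in consumers}

    facts = [ContextFact("temperature", "kitchen", 0.9),
             ContextFact("presence", "living_room", 0.4)]
    consumers = [ContextConsumer("A", {"temperature"}, {"kitchen"}, 0.5)]
    print(partition(facts, consumers))  # only the kitchen temperature fact reaches A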

Figure 8. Scenarios for the evaluation of the system: (A) a centralized approach, where a single inference engine processes all the data from the context providers; (B) a distributed approach, where three inference engines process the data from the context providers; (C) a completely serialized approach, where the outcome of each reasoner is the input of the next one; and (D) a completely parallelized inference process.

Mentions: To evaluate the developed system we have created four scenarios. In the first one, a centralized inference engine processes all the data from the context providers (see Figure 8(A)). In the second one, a distributed architecture of inference engines is used (see Figure 8(B)). This scenario has three inference engines (Context Consumers A, B and C). Part of the inference has been parallelized (Context Consumers A and B), while the rest can only be performed after some of the data has been processed (Context Consumer C). The context data has been distributed among the three Context Consumers: Context Consumer A processes 40% of the Context Providers, Context Consumer B processes 50%, and Context Consumer C takes the final 10% of the Context Providers plus the facts inferred by Context Consumers A and B. In the third scenario (see Figure 8(C)), the inference process is performed by three serialized Context Consumers, with the context data distributed among them: Context Consumer A takes 40% of the data, Context Consumer B takes the facts inferred by A plus another 10% of the context data, and finally Context Consumer C processes B's inferred facts plus the remaining 50% of the context data. In the fourth and final scenario (see Figure 8(D)), the Context Consumers process the context data simultaneously in a completely parallel architecture, with the data divided in the same way as in the third scenario.
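
As a purely illustrative aid, the four topologies can be sketched in a few lines of Python. The function names, the provider count, and the thread-based parallelism below are assumptions made for exposition; they are not the authors' implementation, and infer() merely stands in for a real semantic reasoner:

    from concurrent.futures import ThreadPoolExecutor

    def infer(facts):
        # Stand-in for semantic inference: derives one new fact per input fact.
        return {f"inferred({f})" for f in facts}

    providers = [f"provider_{i}" for i in range(100)]  # simulated context providers

    # Scenario A (Figure 8(A)): centralized -- one engine processes everything.
    out_central = infer(set(providers))

    # Scenario B (Figure 8(B)): A (40%) and B (50%) run in parallel; C then
    # processes the remaining 10% together with the facts inferred by A and B.
    part_a, part_b, part_c = providers[:40], providers[40:90], providers[90:]
    with ThreadPoolExecutor() as pool:
        out_a, out_b = pool.map(infer, [set(part_a), set(part_b)])
    out_c = infer(set(part_c) | out_a | out_b)

    # Scenario C (Figure 8(C)): fully serialized -- A takes 40% of the data,
    # B takes A's inferred facts plus 10%, C takes B's facts plus the other 50%.
    s_a = infer(set(providers[:40]))
    s_b = infer(s_a | set(providers[40:50]))
    s_c = infer(s_b | set(providers[50:]))

    # Scenario D (Figure 8(D)): fully parallel -- same 40/10/50 split as in
    # scenario C, but all three engines run at the same time, sharing nothing.
    with ThreadPoolExecutor() as pool:
        out_d = list(pool.map(infer, [set(providers[:40]),
                                      set(providers[40:50]),
                                      set(providers[50:])]))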


A distributed reasoning engine ecosystem for semantic context-management in smart environments.

Almeida A, López-de-Ipiña D - Sensors (Basel) (2012)

Scenarios for the evaluation of the system. First one (A) shows a centralized approach where a single inference engine processes all the data from the context providers; The second one (B) shows a distributed approach where three inference engines process the data from the context providers; The third one (C) is a completely serialized approach where the outcome of the previous reasoner is the input of the next one; Finally, the fourth scenario (D) shows a completely parallelized inference process.
© Copyright Policy
Related In: Results  -  Collection

License
Show All Figures
getmorefigures.php?uid=PMC3472824&req=5

f8-sensors-12-10208: Scenarios for the evaluation of the system. First one (A) shows a centralized approach where a single inference engine processes all the data from the context providers; The second one (B) shows a distributed approach where three inference engines process the data from the context providers; The third one (C) is a completely serialized approach where the outcome of the previous reasoner is the input of the next one; Finally, the fourth scenario (D) shows a completely parallelized inference process.
Mentions: To evaluate the developed system we have created four scenarios. In the first one a centralized inference engine processes all the data from the context providers (see Figure 8(A)). In the second one a distributed architecture of inference engines is used (see Figure 8(B)). This scenario has three inference engines (Context Consumer A, B and C). Some of the inference have been parallelized (Context Consumer A and B), while part of it can only be performed after some of the data has been processed (Context Consumer C). The context data has been distributed between the three Context Consumers. Context Consumer A will process the 40% of the Context Providers, Context Consumer B will process the 50% and Context Consumer C will take the final 10% of the Context Providers and the inferred facts from Context Consumer A and B. In the third scenario (see Figure 8(C)) the inference process is done in three serialized Context Consumers. The context data has been distributed between the three of them. Context Consumer A takes the 40% of the data, Context Consumer B takes the inferred facts from A and another 10% of the context data and finally Context Consumer C will process B's inferred facts and the other 50% of the context data. In the fourth and final scenario (see Figure 8(D)) Context Consumers process the context data at the same time in a completely parallel architecture. The data is divided in the same way it was divided in the third scenario.

Bottom Line: Ontologies have proven themselves to be one of the best tools to do it.In order to tackle this problem we have developed a mechanism to distribute the context reasoning problem into smaller parts in order to reduce the inference time.Finally we compare the distributed reasoning with the centralized one, analyzing in which situations is more suitable each approach.

View Article: PubMed Central - PubMed

Affiliation: Deusto Institute of Technology (DeustoTech), University of Deusto, Bilbao 48007, Spain. aitor.almeida@deusto.es

ABSTRACT
To be able to react adequately a smart environment must be aware of the context and its changes. Modeling the context allows applications to better understand it and to adapt to its changes. In order to do this an appropriate formal representation method is needed. Ontologies have proven themselves to be one of the best tools to do it. Semantic inference provides a powerful framework to reason over the context data. But there are some problems with this approach. The inference over semantic context information can be cumbersome when working with a large amount of data. This situation has become more common in modern smart environments where there are a lot sensors and devices available. In order to tackle this problem we have developed a mechanism to distribute the context reasoning problem into smaller parts in order to reduce the inference time. In this paper we describe a distributed peer-to-peer agent architecture of context consumers and context providers. We explain how this inference sharing process works, partitioning the context information according to the interests of the agents, location and a certainty factor. We also discuss the system architecture, analyzing the negotiation process between the agents. Finally we compare the distributed reasoning with the centralized one, analyzing in which situations is more suitable each approach.

Show MeSH