A Geospatial Semantic Enrichment and Query Service for Geotagged Photographs.

Ennis A, Nugent C, Morrow P, Chen L, Ioannidis G, Stan A, Rachev P - Sensors (Basel) (2015)

Bottom Line: Nevertheless, it still remains a challenge to discover the "right" information for the appropriate purpose. To achieve this we have developed and implemented a semantic geospatial data model by which a photograph can be enriched with geospatial metadata extracted from several geospatial data sources, based on the raw low-level geo-metadata from a smartphone photograph. We present the details of our method and implementation for searching and querying the semantic geospatial metadata repository to enable a user or third-party system to find the information they are looking for.

View Article: PubMed Central - PubMed

Affiliation: School of Computing and Mathematics, University of Ulster, Coleraine BT370QB, UK. ennis-a1@email.ulster.ac.uk.

ABSTRACT
With the increasing abundance of technologies and smart devices, equipped with a multitude of sensors for sensing the environment around them, information creation and consumption have become effortless. This is particularly the case for photographs, with vast numbers being created and shared every day. For example, at the time of writing, Instagram users upload 70 million photographs a day. Nevertheless, it remains a challenge to discover the "right" information for the appropriate purpose. This paper describes an approach to create semantic geospatial metadata for photographs, which can facilitate photograph search and discovery. To achieve this we have developed and implemented a semantic geospatial data model by which a photograph can be enriched with geospatial metadata extracted from several geospatial data sources, based on the raw low-level geo-metadata from a smartphone photograph. We present the details of our method and implementation for searching and querying the semantic geospatial metadata repository to enable a user or third-party system to find the information they are looking for.

No MeSH data available.



sensors-15-17470-f004: Example SWRL rule used by MediaPlace to determine which POIs lie in the direction the photograph was taken.

Mentions: SWRL rules are applied to the semantic geospatial data repository for collected photographs to infer further information and relationships, in particular the POIs associated with a photograph. Owing to the complexity of geospatial calculations, several custom functions, called built-ins, were developed, for example to calculate the compass bearing between two GPS points, the distance between two GPS coordinates, and the similarity of POI names. These built-ins supply values back to the rule, which can be used in comparisons or in the result of the rule. An example is shown in Figure 4, where one of the rules calculates whether a POI lies in the direction the photograph was taken. This, coupled with the distance from the photograph, means that when the system queries the semantic geospatial metadata repository to determine what the photograph is looking at, these values have already been calculated, so the query is both simpler to express and computationally cheaper. The results from the rules are added back into the semantic geospatial metadata repository, adding further semantic context to the photograph, such as how far away the POIs are and in what direction.
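
To make the role of these built-ins concrete, the following is a minimal Python sketch, not the authors' SWRL implementation; all function names and the angular and distance thresholds are illustrative assumptions. It shows the kind of calculations described above: the compass bearing between two GPS points, the great-circle distance between two GPS coordinates, a simple POI name similarity measure, and a rule-style check for whether a POI lies in the direction the photograph was taken.

    import difflib
    import math

    def haversine_distance_m(lat1, lon1, lat2, lon2):
        # Great-circle distance between two GPS coordinates, in metres.
        r = 6371000.0  # mean Earth radius
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def compass_bearing_deg(lat1, lon1, lat2, lon2):
        # Initial compass bearing from point 1 to point 2, in degrees [0, 360).
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlam = math.radians(lon2 - lon1)
        y = math.sin(dlam) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
        return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

    def poi_name_similarity(name_a, name_b):
        # Simple string similarity in [0, 1]; the paper does not specify which measure is used.
        return difflib.SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()

    def poi_in_view(photo_lat, photo_lon, photo_heading_deg, poi_lat, poi_lon,
                    max_angle_deg=30.0, max_distance_m=500.0):
        # Rule-style check: is the POI roughly in the direction the photograph was taken?
        # The 30-degree and 500-metre thresholds are assumptions for illustration only.
        bearing = compass_bearing_deg(photo_lat, photo_lon, poi_lat, poi_lon)
        distance = haversine_distance_m(photo_lat, photo_lon, poi_lat, poi_lon)
        angle_diff = abs((bearing - photo_heading_deg + 180.0) % 360.0 - 180.0)
        return angle_diff <= max_angle_deg and distance <= max_distance_m

In the system described, values of this kind would be produced by the SWRL built-ins and asserted back into the repository as new properties of the photograph (for example, a POI's distance and direction), rather than being computed at query time.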

