2014 IEEE Geoscience and Remote Sensing Symposium
DOI: 10.1109/igarss.2014.6946942
Ontology-based semantic classification of satellite images: Case of major disasters

Abstract: The International Charter "Space and Major Disasters" is regularly activated during catastrophic events and provides rescue teams with damage maps. Most of these maps are built by means of manual satellite image processing, which is often complex and demanding in terms of time and energy. Automatic processing offers prompt treatment; nevertheless, it usually suffers from the semantic gap. The exploitation of ontologies to bridge the semantic gap has been widely recommended due to their quality of knowledge…

Cited by 11 publications (10 citation statements); references 3 publications (3 reference statements).
“…For future work, as well as improving the ontology in terms of a comprehensive domain representation, we will be looking into its applications in other areas. Of particular interest is the ontology-based semantic annotation of satellite images [43], change detection, and response to queries related to emergency needs, such as the location of hospitals relative to affected areas, detecting operational roads, and locating the highest-priority areas (schools, residential buildings, etc.).…”
Section: Discussion
confidence: 99%
“…Our automatic annotation approach for change image pairs uses a visual semantic embedding model to match the images with the relevant keywords extracted from the publication corpus. When the semantic information about the classes present in EO images is inconsistent or lacking, different solutions have been proposed to extract that information from other sources such as ontologies [4] or geo-referenced Wikipedia articles [21]. Another solution is to use visual semantic embeddings by representing the images and text in the same vector space and learning the classes of the unlabeled images from the similarity between the vector representations across the image and text modalities [14].…”
Section: Methods
confidence: 99%
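The citation statement above describes classifying unlabeled images by comparing image and text embeddings in a shared vector space. The following is a minimal sketch of that idea, not the cited authors' code: the embedding vectors are random stand-ins for whatever image and text encoders a visual semantic embedding model would provide, and the class keywords are illustrative.

```python
# Sketch: assign a class to an unlabeled image by cosine similarity between
# its embedding and the embeddings of class keywords in a shared vector space.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_image(image_vec: np.ndarray, class_text_vecs: dict) -> str:
    """Return the class whose keyword embedding is closest to the image embedding."""
    return max(class_text_vecs, key=lambda c: cosine_similarity(image_vec, class_text_vecs[c]))

# Toy usage: random vectors stand in for real encoder outputs.
rng = np.random.default_rng(0)
class_text_vecs = {
    "flooded area": rng.normal(size=128),
    "intact building": rng.normal(size=128),
    "damaged road": rng.normal(size=128),
}
image_vec = rng.normal(size=128)
print(classify_image(image_vec, class_text_vecs))
```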
“…By integrating an ontology into the segmentation process of pre- and post-disaster images, the authors in [4] showed that overall accuracy rose from 67.9% to 89.4% on images of their test area. With a reduced number of samples (200), the authors in [21] demonstrated that, using Wikipedia annotations for the task of semantic segmentation, the Intersection-over-Union (IoU) score was 51.70%, compared to 50.75% when pre-training on ImageNet.…”
Section: Methods
confidence: 99%
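The IoU score mentioned in the statement above is a standard segmentation metric. Below is a generic illustration of how it is computed per class from predicted and ground-truth masks; this is not the evaluation code of the cited works, and the toy masks and class labels are assumptions for the example.

```python
# Sketch: per-class Intersection-over-Union (IoU) for semantic segmentation,
# assuming integer-labelled prediction and ground-truth masks of equal shape.
import numpy as np

def iou_per_class(pred: np.ndarray, truth: np.ndarray, class_id: int) -> float:
    """IoU = |pred ∩ truth| / |pred ∪ truth| for one class; NaN if the class is absent."""
    pred_mask = pred == class_id
    truth_mask = truth == class_id
    intersection = np.logical_and(pred_mask, truth_mask).sum()
    union = np.logical_or(pred_mask, truth_mask).sum()
    return float(intersection / union) if union > 0 else float("nan")

# Toy 3x3 masks: class 1 = "building", class 0 = "background".
pred  = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
print(f"IoU(building) = {iou_per_class(pred, truth, 1):.2f}")  # 2 / 3 ≈ 0.67
```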