Proceedings of the Third Annual Workshop on Lifelog Search Challenge 2020
DOI: 10.1145/3379172.3391723

Multimodal Retrieval through Relations between Subjects and Objects in Lifelog Images

Cited by 15 publications (13 citation statements)
References 4 publications
“…FIRST [25] uses an autoencoder-like approach to map query text and images into a common semantic space and measure the similarity between them; LifeGraph [23] uses a knowledge graph to represent the lifelog data, capturing the internal relations of the various data modalities and linking it to external static data sources for better semantic understanding. Chu et al. [6] extracted relation graphs from lifelog images to better describe the relationships between the entities (subject-object) present within the image.…”
Section: Related Work
confidence: 99%
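
To make the shared-space idea described for FIRST [25] concrete, here is a minimal sketch of cross-modal matching: text and image features are projected into one common space and compared by cosine similarity. The projection matrices, dimensionalities, and feature vectors below are illustrative placeholders, not the published model.

```python
import numpy as np

# Hypothetical projection matrices standing in for an autoencoder-style
# model; in practice these would be learned, not randomly initialized.
rng = np.random.default_rng(0)
W_text = rng.normal(size=(128, 300))    # text features (300-d) -> shared space (128-d)
W_image = rng.normal(size=(128, 2048))  # image features (2048-d) -> shared space (128-d)

def to_shared_space(features: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project modality-specific features into the shared semantic space and L2-normalize."""
    z = projection @ features
    return z / np.linalg.norm(z)

def cross_modal_similarity(text_feat: np.ndarray, image_feat: np.ndarray) -> float:
    """Cosine similarity between a text query and a lifelog image in the shared space."""
    return float(to_shared_space(text_feat, W_text) @ to_shared_space(image_feat, W_image))

# Toy feature vectors standing in for real text/image encoder outputs.
query_features = rng.normal(size=300)
image_features = rng.normal(size=2048)
print(cross_modal_similarity(query_features, image_features))
```

Once both modalities live in one space, ranking a lifelog collection against a text query reduces to a nearest-neighbor search over the projected image vectors.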
“…Most of the systems offer querying by location. For our work, the systems Myscéal [29], LifeSeeker [14], and the work of Chu et al. [3] are particularly interesting, since they visualize spatial context with the help of a map in various ways:…”
Section: Related Work
confidence: 99%
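
As a rough illustration of the location querying that sits behind such map interfaces, the following sketch keeps only images whose GPS tags fall within a radius of a selected map point. The record layout and the haversine-based filter are assumptions for the example, not code from any of the cited systems.

```python
import math

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometres between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def filter_by_location(images, center_lat, center_lon, radius_km):
    """Keep lifelog images whose GPS tag lies within radius_km of a chosen map point."""
    return [img for img in images
            if haversine_km(img["lat"], img["lon"], center_lat, center_lon) <= radius_km]

# Toy metadata records; a real system would read these from the lifelog dataset.
images = [
    {"id": "img_001", "lat": 53.349, "lon": -6.260},  # Dublin city centre
    {"id": "img_002", "lat": 48.137, "lon": 11.575},  # Munich
]
print(filter_by_location(images, 53.35, -6.26, radius_km=5.0))  # -> img_001 only
```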
“…For this experiment, we used the LSC'18/19 dataset [14] and created a new set of twenty semantic queries: ten randomly chosen topics from the LSC'19 dataset (representing conventional lifelog queries) and ten manually created topics that focus on visually describing a known-item from a lifelog. Notably, [6] also followed the concept of using a scene graph for lifelogging visual data. However, that system used such a graph only as a supplement to the retrieval process and did not treat the query itself as a graph, as our proposed method does.…”
Section: Related Work
confidence: 99%
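
The contrast drawn in this excerpt can be illustrated with a minimal sketch: if both the query and each image are reduced to (subject, relation, object) triples, retrieval becomes a graph-overlap score. The triples, the exact-overlap scoring function, and the data layout below are hypothetical simplifications, not the proposed method; real systems would use graph embeddings or approximate matching rather than exact triple intersection.

```python
# A scene graph reduced to (subject, relation, object) triples; the query is
# itself represented as a small graph and scored against each image's graph.

def graph_score(query_triples, image_triples) -> float:
    """Fraction of query triples that also appear in the image's relation graph."""
    if not query_triples:
        return 0.0
    return len(set(query_triples) & set(image_triples)) / len(query_triples)

# Hypothetical triples extracted from lifelog images and from a text query.
image_graphs = {
    "img_001": [("person", "holding", "cup"), ("cup", "on", "table")],
    "img_002": [("person", "riding", "bicycle")],
}
query_graph = [("person", "holding", "cup")]

ranked = sorted(image_graphs,
                key=lambda k: graph_score(query_graph, image_graphs[k]),
                reverse=True)
print(ranked)  # img_001 ranks first: it shares the "person holding cup" triple
```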