2019
DOI: 10.3169/mta.7.46

[Invited papers] Comparing Approaches to Interactive Lifelog Search at the Lifelog Search Challenge (LSC2018)

Abstract: The Lifelog Search Challenge (LSC) is an international content retrieval competition that evaluates search for personal lifelog data. At the LSC, content-based search is performed over a multi-modal dataset, continuously recorded by a lifelogger over 27 days, consisting of multimedia content, biometric data, human activity data, and information activities data. In this work, we report on the first LSC that took place in Yokohama, Japan in 2018 as a special workshop at ACM International Conference on Multimedia…

Cited by 77 publications (64 citation statements)
References 23 publications
“…The last part of the tutorial focuses on observed results at the Video Browser Showdown [6,7] and Lifelog Search Challenge [3]. For both competitions, we present evaluated tasks, metrics, scoring and constraints.…”
Section: Recent Results and Future Directions
“…The third part will provide an overview of existing evaluation campaigns, such as the VBS [6,7], the LSC [3] or TRECVID [1], outline their tasks, goals, commonalities and differences and discuss their evaluation strategies. The choice of evaluation strategies is not only influenced by aspects such as repeatability and the reuse of assessments, but also impacted by the setting of the evaluation campaign, i.e., whether the competition is live in front of the audience (as e.g.…”
Section: Evaluation Campaigns
“…In this context, a known-item search task refers to a scenario where the participant is provided with a specific description of an item, for example an image, and is then asked to retrieve it using the designated lifelog system. This style of task was chosen as it is the most commonly used method to evaluate lifelog retrieval [11] and does not necessitate the user being the owner of the lifelog dataset. Further details regarding our precise experiment configuration and the topics used for our known-item search tasks are outlined in a parallel work [5], which is outside the scope of this paper to describe.…”
Section: Discussion
“…More recently, we note the introduction of a new challenge, specifically aimed at comparing approaches to interactive retrieval from lifelog archives. The Lifelog Search Challenge (LSC) [6] utilises a dataset [5] similar to the one used for the NTCIR14-Lifelog task. The LSC has occurred in 2018 and 2019 and attracted significant interest from participants.…”
Section: Related Interactive Lifelog Retrieval Systems
“…These changes were combined with a slightly revised interface to take into account the richer metadata and the content similarity functionality, as shown in Figures 4, 5, and 6. In the interactive search competition at LSC2019, this system performed among the top-ranked teams with an overall score of 68, compared to the vitrivr system [19], which was given a score of 100. Interestingly, the system significantly closed the gap to the NTCIR-14 system from HCMUS (which also competed at the LSC in 2019), which scored 72 in the competition.…”
Section: User Feedback