26th International Conference on Intelligent User Interfaces 2021
DOI: 10.1145/3397481.3450664

GO-Finder: A Registration-Free Wearable System for Assisting Users in Finding Lost Objects via Hand-Held Object Discovery

Abstract: People spend an enormous amount of time and effort looking for lost objects. To help remind people of the location of lost objects, various computational systems that provide information on their locations have been developed. However, prior systems for assisting people in finding objects require users to register the target objects in advance. This requirement imposes a cumbersome burden on the users, and the system cannot help remind them of unexpectedly lost objects. We propose GO-Finder ("Generic Object Finder"), …

Cited by 8 publications (4 citation statements)
References 28 publications
“…The system associates static images of objects with the individuals who might possess them, providing users with an alternative method for locating items. Similarly, Yagi et al. [35] proposed GO-Finder, a system that automatically identifies and logs items during the object placement process. When users need to find these items later, the system provides a timeline of last-appearance images to aid in object retrieval tasks.…”
Section: Related Work, 2.1 Search Assistance
confidence: 99%
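
As a rough illustration of the logging-and-timeline idea summarized in the statement above (not the authors' implementation), the sketch below keeps one log entry per discovered object and returns the most recent snapshot of each object, newest first. The object IDs and snapshots are assumed to come from some upstream hand-held object discovery step; all names here are hypothetical.

import time
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple

@dataclass
class Appearance:
    timestamp: float          # when the object was last seen in the hand
    image: Any                # snapshot, e.g. a cropped video frame

@dataclass
class ObjectLog:
    appearances: List[Appearance] = field(default_factory=list)

def update_log(log: Dict[int, ObjectLog], detections: List[Tuple[int, Any]]) -> None:
    """Record the latest appearance of each hand-held object.

    `detections` is a hypothetical list of (object_id, snapshot) pairs
    produced by a hand-held object discovery step at placement time.
    """
    for obj_id, snapshot in detections:
        log.setdefault(obj_id, ObjectLog()).appearances.append(
            Appearance(timestamp=time.time(), image=snapshot))

def last_appearance_timeline(log: Dict[int, ObjectLog]) -> List[Tuple[float, Any]]:
    """Return (timestamp, snapshot) pairs of each object's last appearance, newest first."""
    latest = [(o.appearances[-1].timestamp, o.appearances[-1].image)
              for o in log.values() if o.appearances]
    return sorted(latest, key=lambda pair: pair[0], reverse=True)
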
“…In this study, we evaluate the performance of three types of visual cues in object retrieval tasks: (i) last frame (LF): a static image of the object's last appearance; (ii) video: normal-speed video playback starting from the DA starting point; and (iii) video-LF: video playback starting from the DA starting point at normal speed, which then keeps displaying the last frame after playback completes. Numerous studies have confirmed the effectiveness of images of objects' last appearance in object retrieval tasks [33,35]. Inspired by these studies on extracting static images of objects' last appearance to serve as visual cues, we used the static image of the object's last appearance in the video playback as a visual cue, i.e., the last frame of the video playback (LF).…”
Section: User Study
confidence: 99%
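
The last-frame (LF) cue described above can be obtained by simply decoding a playback clip to its end. The following is a minimal sketch, assuming OpenCV is available and that the clip already starts at the DA starting point; the file name is hypothetical.

import cv2

def last_frame(video_path: str):
    """Return the last decodable frame of the clip, or None if nothing was read."""
    cap = cv2.VideoCapture(video_path)
    last = None
    while True:
        ok, frame = cap.read()
        if not ok:              # end of stream (or decode failure)
            break
        last = frame            # keep overwriting until the stream ends
    cap.release()
    return last

# Usage sketch: keep showing the LF cue once video playback has finished.
# cue = last_frame("placement_clip.mp4")   # hypothetical file name
# if cue is not None:
#     cv2.imshow("video-LF cue", cue)
#     cv2.waitKey(0)
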
“…The personal item tracking system is a large-scale innovation that still needs improvement. The paper [6] described another way a tracking system can work, using a camera-based hand-held object discovery approach. Moreover, researchers have applied Bluetooth communication to various topics such as object tracking, child monitoring, and location detection [7][8][9].…”
Section: Introduction
confidence: 99%
“…Detecting the positions of a person's hands and an object-in-contact (hand-object detection) from an image provides an important clue for understanding how the person interacts with the physical world. Hand-object detection is applicable to recognizing a person's primitive actions, such as "taking" or "pushing", and to logging the person's activity of interacting with the environment [1]. Shan et al. [2] built a hand-object detector for localizing hands and interacting objects on a large-scale dataset collected in naturalistic household situations, such as kitchen work [3,4,5], DIY [2], and craft work [2,5].…”
Section: Introduction
confidence: 99%
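
To make the detector output described in the statement above concrete, the sketch below shows one possible representation of hand-object detections (hand box, contact flag, interacting-object box) and how an application might keep only the objects currently being held. This is an illustrative data structure, not the interface of Shan et al.'s detector; all names are hypothetical.

from dataclasses import dataclass
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]     # (x1, y1, x2, y2) in pixels

@dataclass
class HandDetection:
    hand_box: Box
    in_contact: bool                         # is the hand touching an object?
    object_box: Optional[Box] = None         # the interacting object, if any
    score: float = 0.0                       # detector confidence for the hand

def held_objects(detections: List[HandDetection], min_score: float = 0.5) -> List[Box]:
    """Boxes of objects currently in contact with a confidently detected hand."""
    return [d.object_box for d in detections
            if d.in_contact and d.object_box is not None and d.score >= min_score]
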