2017 IEEE International Conference on Computer Vision Workshops (ICCVW) 2017
DOI: 10.1109/iccvw.2017.177
A Wearable Assistive Technology for the Visually Impaired with Door Knob Detection and Real-Time Feedback for Hand-to-Handle Manipulation

Cited by 20 publications (21 citation statements)
References 13 publications
“…A direct comparison with systems already reported in the literature targeting the assistance of the visually impaired is not straightforward, since most of them fail to report their mAPs [29,30,32]. Other object recognition works [44,68,69] for different applications report comparable mAPs between 70% and 90%, exploring other deep learning approaches, such as SSD (Single Shot MultiBox Detector), YOLO (You Only Look Once), and R-FCN (Region-based Fully Convolutional Networks), trained on other image databases such as PASCAL VOC (Visual Object Classes), SUN (Scene UNderstanding), and KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute).…”
Section: Results and Discussion
Confidence: 93%
“…Few systems have targeted the needs of VI people; Niu et al. described, in [29], a wearable system that detects doorknobs and human hands to help blind people locate and use doors. Panchal et al. presented, in [30], a new approach for recognizing text in scene images and converting it into speech so that it can assist VI people.…”
Section: State of the Art on Innovative Assistive Technology Devices
Confidence: 99%
“…A significant component of what supports spatiocognitive activities performed without vision is touch. Obvious applications of touch to environmental awareness and spatial cognition include detecting and identifying objects and localizing key features in the environment (e.g., doorknobs, railings) [9]. However, touch is also an important input for directly supporting safe and efficient navigation.…”
Section: Perspective
Confidence: 99%
“…VIS4ION remedies some of the cane's shortcomings, and further augments the ability of BVI persons both to maintain balance and to localize objects in their environment [23,24]. The system also provides robust networked features, which expand its computational power through connectivity [9,25-28].…”
Section: Perspective
Confidence: 99%
“…These components can be seamlessly integrated into wearables that feature both haptic and audio outputs (human-machine interfaces), such as bone-conduction headsets with synthetic speech output, or vibrating belts or bracelets, such as the VIS4ION system (Visually Impaired Smart Service System for Spatial Intelligence and Onboard Navigation). This platform provides real-time situational and obstacle awareness of one's immediate environment, allowing individuals to travel more safely in three-dimensional (3D) space, paying particular attention to low-body, mid-body, and high-body/head hazards [59-62].…”
Section: England
Confidence: 99%