2023
DOI: 10.21203/rs.3.rs-2867156/v1
Preprint
 Virtual Assistant and Navigation for Visually Impaired using Deep Neural Network and Image Processing

Abstract: Rapid advances in technology encourage the use of readily available resources to simplify daily tasks and improve the quality of life of blind people. This work proposes a system of virtual-assistant glasses that helps visually impaired users navigate their surroundings. An obstacle detection module built into the device uses computer vision to detect obstacles and alerts the user through haptic feedback. The system also includes a text recognition module that can turn any text it ide…
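The abstract describes coupling obstacle detection with haptic feedback. As an illustrative sketch only (not the authors' implementation), one simple design maps an estimated obstacle distance to a vibration intensity; the function name and the `max_range_m` cutoff here are assumptions, not taken from the paper:

```python
def haptic_intensity(distance_m: float, max_range_m: float = 2.0) -> float:
    """Map an estimated obstacle distance (metres) to a vibration
    intensity in [0, 1].

    Closer obstacles produce stronger feedback; obstacles at or beyond
    max_range_m produce none. max_range_m is an assumed parameter for
    illustration, not a value from the paper.
    """
    if distance_m <= 0:
        return 1.0          # obstacle at or touching the sensor: full vibration
    if distance_m >= max_range_m:
        return 0.0          # out of the assumed alert range: no vibration
    # Linear falloff between the sensor and the range limit.
    return 1.0 - distance_m / max_range_m


print(haptic_intensity(0.5))  # 0.75
```

In a real device this value would drive a vibration motor's PWM duty cycle; a nonlinear (e.g. quadratic) falloff could instead be used to emphasize very close obstacles.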

Cited by 1 publication (2 citation statements) · References 16 publications
“…They used contextual information such as data types, captions, matching sentences, etc., and devised a keyboard-based navigational menu for interaction. Bukhaya et al. [22], Nair et al. [23], and Christopherson et al. [24] leveraged image processing techniques via deep learning to convert the visual information into text for subsequent processing by a text reader. Tucket et al. [25] embedded Near Field Communication (NFC) in academic pages preloaded with the speak command.…”
Section: BU State-of-the-art Tools (mentioning)
confidence: 99%
“…Ref. | Approach | Limitations
[10] | BUs face challenges due to lack of navigational support | Reliance on third-party assistive tools
[11] | Utilization of screen readers, JAWS, voice assistants | Limited effectiveness of assistive tools
[20] | Voice-activated email prototype | Focused on a specific application (email)
[21] | Reduce cognitive load using summarized information | Limited to information presented in tabular format
[22]–[24] | Leverage image processing to convert visual information | Relies on image recognition; may not cover all content
[25] | Embed Near Field Communication (NFC) for interaction | Limited to specific contexts (academic pages)
[26] | Pigeon algorithm for efficient web page retrieval | Focused on improving search result relevancy
[27] | Image processing for automatic graph description | Limited to content with graphical elements
[28] | Voice assistance augmentation for BUs | Primarily extends voice assistant functionality…”
Section: Cited (mentioning)
confidence: 99%