2015 International Conference on Computer, Communication and Control (IC4)
DOI: 10.1109/ic4.2015.7375665
VisualPal: A mobile app for object recognition for the visually impaired

Cited by 20 publications (5 citation statements)
References 5 publications
“…Shagufta Md. Rafique Bagwan and L. J. Sankpal [7] proposed an Android application that helps visually impaired people recognize objects, detect the direction of maximum brightness, and detect colors. The result of image recognition is communicated to visually impaired users through pre-recorded verbal messages.…”
Section: Literature Review
confidence: 99%
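The "Detection of Direction of Maximum Brightness" step mentioned in the quotation can be sketched as follows. This is a minimal, hypothetical illustration, not the cited app's actual algorithm: the three-way region split, the function name, and the sample frame are all assumptions.

```python
# Hypothetical sketch: split a grayscale frame into left/center/right
# thirds and report the brightest one. The region layout and names are
# assumptions, not the cited app's implementation.

def brightest_direction(frame):
    """frame: list of rows, each a list of 0-255 pixel intensities."""
    width = len(frame[0])
    third = width // 3
    regions = {
        "left":   [row[:third] for row in frame],
        "center": [row[third:2 * third] for row in frame],
        "right":  [row[2 * third:] for row in frame],
    }

    def mean_intensity(region):
        pixels = [p for row in region for p in row]
        return sum(pixels) / len(pixels)

    # The brightest region gives the direction reported to the user.
    return max(regions, key=lambda name: mean_intensity(regions[name]))

# A 4x9 frame whose right third is brightest.
frame = [[10, 10, 10, 120, 120, 120, 240, 240, 240]] * 4
print(brightest_direction(frame))  # "right"
```

In the app described by the quotation, the returned direction would then be mapped to one of the pre-recorded verbal messages.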
“…All those sensors, however, increase the equipment requirements for such a solution to work, demanding a more specialized setup associated with more expensive hardware. Bagwan and Sankpal (2015) propose a solution called VisualPal that does not require environment adaptations and is able to recognize colors, brightness, and objects. VisualPal uses an artificial neural network running on Android systems to recognize objects; however, its performance is not discussed by the authors, as the processing time for object recognition is not reported.…”
Section: Related Work
confidence: 99%
“…As a consequence, our solution is still able to identify objects even if they are moved from one position to another in the environment. Similarly to other approaches based on object recognition (Deb et al., 2013, Bagwan and Sankpal, 2015, Matusiak et al., 2013), our solution uses SIFT descriptors, the already existing network infrastructure of an environment, and a smartphone to provide low-cost contextual guidance to visually impaired users. In that light, we clearly present the processing time and limitations regarding the use of SIFT descriptors in our approach, which is not clearly discussed by previous authors.…”
[Figure caption: 2) Client image sent to server; 3 and 4) Check similarity of client image against images in database; 5) Fetch metadata of database image that is most similar to client image; 6) Send metadata back to mobile client, which will provide the user with audio feedback.]
Section: Related Work
confidence: 99%
“…This can help visually impaired users recognize the objects in their surroundings. All detections are provided to the visually impaired user through voice output [2].…”
Section: VisualPal
confidence: 99%