2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC)
DOI: 10.1109/r10-htc.2017.8288999
Implementation of a reading device for Bengali speaking visually handicapped people

Cited by 4 publications (2 citation statements) · References 4 publications
“…In this case, several studies that employ image and gesture recognition and voice detection describe the accuracy of their methods through several techniques and software tools, such as Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs), the Haar cascade classifier, or the Speeded Up Robust Features (SURF) method. For instance, ATs developed for visual disabilities report accuracies between 63% and 95.1% using CNNs and You Only Look Once (YOLO) [95], [121], 88% to 90% for the SURF method [94], 90% for a blob detection algorithm [122], 84% for the Google Cloud Vision API [123], and 85% for Tesseract [124], a tool for OCR applications. Similarly, ATs for mobility disabilities using EEG and EMG signals report an accuracy of 80% employing both SVMs [125] and the NeuroSky MindWave headset [92], 83% with the Receiver Operating Characteristic (ROC) [126], and 97.1% for detection of facial expressions through the Viola-Jones algorithm [96].…”
Section: Research Topics (citation type: mentioning, confidence: 99%)
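The Tesseract figure above bears directly on the reviewed paper, whose reading device rests on OCR of printed Bengali. Below is a minimal sketch of that recognition step, assuming the pytesseract wrapper, Pillow, and Tesseract's Bengali traineddata ("ben") are installed; the input filename is a placeholder, not taken from the paper.

```python
# Minimal sketch of the OCR step of a Bengali reading device using
# Tesseract (the OCR tool cited at [124]). Assumes pytesseract, Pillow,
# and the Bengali language pack ("ben") are available on the system;
# "scanned_page.png" is a hypothetical input file.
from PIL import Image
import pytesseract

def recognize_bengali(image_path: str) -> str:
    """Run Tesseract OCR with the Bengali language model on one page image."""
    page = Image.open(image_path).convert("L")  # grayscale often helps OCR
    # lang="ben" selects the Bengali traineddata shipped with Tesseract
    return pytesseract.image_to_string(page, lang="ben")

if __name__ == "__main__":
    print(recognize_bengali("scanned_page.png"))
```

This requires the tesseract binary itself on the system path with the ben language pack installed; as the 85% figure quoted above suggests, accuracy will vary with scan quality and preprocessing.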
“…4. OCR reading and dictating assistants: [123], [124], [172]-[176], [176]-[179], [57], [63], [66]. 5. Color detection and compensation. […] included low-cost GPS with the accompaniment of ultrasonic sensors in order to geolocate disabled persons and help them to navigate in outdoor settings.…”
Section: Educational Devices and Materials (citation type: mentioning, confidence: 99%)
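The "dictating" half of category 4 above means voicing the recognized text. As an illustrative sketch only (not the synthesizer used in the cited works), the gTTS library supports Bengali under the language code "bn"; the sample sentence is a placeholder.

```python
# Minimal sketch of the dictation step of an OCR reading assistant:
# synthesize Bengali speech from recognized text. gTTS is an assumed,
# illustrative choice (cloud-backed Google Text-to-Speech), not the
# engine used by the cited systems.
from gtts import gTTS

def dictate_bengali(text: str, out_path: str = "speech.mp3") -> str:
    """Synthesize Bengali speech from text and save it as an MP3 file."""
    tts = gTTS(text=text, lang="bn")  # "bn" = Bengali
    tts.save(out_path)
    return out_path

if __name__ == "__main__":
    # Hypothetical Bengali sentence ("I can read in Bengali")
    dictate_bengali("আমি বাংলায় পড়তে পারি")
```

Audio playback is left to a local player; on a standalone embedded reading device, an offline synthesizer would typically replace this cloud-backed call.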