“…[84]. Buimer et al. [62] presented an experiment that recognizes facial emotions and sends feedback to the user through a vibration belt. The authors convey each of six emotions through six corresponding vibration units in the belt.…”
Section: Haptics
Citation type: mentioning
confidence: 99%
“…In addition, social engagement, watching what others are doing, and simulating a variety of visual skills, such as object recognition or text recognition, were important abilities. The importance of knowing people's emotions was discussed in [62]. The authors used computer vision technology to address the problem.…”
Over a billion people around the world are disabled; among them, 253 million are visually impaired or blind, and this number is increasing rapidly due to ageing, chronic disease, and poor environmental and health conditions. Despite many proposals, current devices and systems lack maturity and do not fully meet user requirements or satisfaction. Increased research activity in this field is required to encourage the development, commercialization, and widespread acceptance of low-cost, affordable assistive technologies for visual impairment and other disabilities. This paper proposes a novel approach that uses a LiDAR with a servo motor and an ultrasonic sensor to collect data and recognize objects using deep learning, for environment perception and navigation. We adopted this approach in a pair of smart glasses, called LidSonic V2.0, to enable the identification of obstacles by the visually impaired. The LidSonic system consists of an Arduino Uno edge-computing device integrated into the smart glasses and a smartphone app that communicates with it via Bluetooth. The Arduino gathers data, operates the sensors on the smart glasses, detects obstacles using simple data processing, and provides buzzer feedback to the visually impaired user. The smartphone application collects the data from the Arduino, detects and classifies objects in the spatial environment, and gives the user spoken feedback on the detected objects. Compared with image-processing-based glasses, LidSonic uses far less processing time and energy to classify obstacles, because it works from simple LiDAR data consisting of a small number of integer distance measurements. We comprehensively describe the proposed system's hardware and software design, construct a prototype implementation, and test it in real-world environments. Built on the open platforms WEKA and TensorFlow, the entire LidSonic system uses affordable off-the-shelf sensors and a microcontroller board costing less than $80. Essentially, we provide the design of an inexpensive, miniature, green device that can be built into, or mounted on, any pair of glasses, or even a wheelchair, to help the visually impaired. Our approach affords faster inference and decision-making at relatively low energy, with smaller data sizes and faster communication across edge, fog, and cloud computing.
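For concreteness, the following minimal Arduino-style C++ sketch shows the kind of sensing loop the abstract describes: a servo sweeps the LiDAR, the readings stream to the phone over Bluetooth, and the ultrasonic sensor drives the buzzer. All pin assignments, the distance threshold, the serial wiring, and the readLidarCm() helper are illustrative assumptions, not the authors' actual firmware.

// Minimal sketch of the LidSonic-style sensing loop (assumed wiring).
#include <Servo.h>
#include <SoftwareSerial.h>

const int TRIG_PIN   = 9;    // HC-SR04-style ultrasonic trigger (assumed)
const int ECHO_PIN   = 10;   // ultrasonic echo (assumed)
const int BUZZER_PIN = 8;    // piezo buzzer for on-device warnings (assumed)
const int SERVO_PIN  = 6;    // servo that sweeps the LiDAR (assumed)
const int OBSTACLE_CM = 100; // buzz when something is closer than 1 m

Servo sweepServo;
SoftwareSerial bt(2, 3);     // RX, TX to an HC-05-style Bluetooth module

long readUltrasonicCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000UL); // time out at roughly 5 m
  return us / 58;                             // microseconds -> centimetres
}

int readLidarCm() {
  // Placeholder: the actual read-out depends on the LiDAR module's
  // interface (UART/I2C); substitute the vendor's protocol here.
  return -1;
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(BUZZER_PIN, OUTPUT);
  sweepServo.attach(SERVO_PIN);
  bt.begin(9600);            // stream readings to the smartphone app
}

void loop() {
  // Sweep the LiDAR across the field of view, forwarding each sample
  // as "angle,distance" for the phone-side classifier.
  for (int angle = 30; angle <= 150; angle += 10) {
    sweepServo.write(angle);
    delay(50);               // let the servo settle before sampling
    bt.print(angle); bt.print(','); bt.println(readLidarCm());
  }
  // Simple on-device obstacle check from the ultrasonic sensor.
  long cm = readUltrasonicCm();
  if (cm > 0 && cm < OBSTACLE_CM) {
    tone(BUZZER_PIN, 2000, 100); // short warning beep
  }
}

Keeping the on-device check to a single threshold comparison is consistent with the paper's point: the Arduino can warn immediately with almost no computation, while the heavier classification runs on the phone.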
“…Several studies showed that social interaction can be particularly demanding for VIs because communication with others includes exchanges of nonverbal cues such as gaze, gesture, or facial expression that cannot be recognized by VIs [5,36]. Phillips and Proulx insisted that assistive devices should facilitate acquisition of nonverbal information; they defined a set of design criteria in their work: functionality, usability, cognitive demand and aesthetics [25].…”
Section: State-of-the-art
Citation type: mentioning
confidence: 99%
“…The study also highlighted remaining technical and hardware problems that should be resolved before using such a system in the wild, as well as the strong heterogeneity of expectations and needs amongst visually impaired users. Buimer et al. developed a custom assistive device for VIs that supports emotion recognition by utilizing haptic technologies [5]. In this project, six emotions captured through a camera on glasses were mapped onto the same number of vibrators on a waist belt, so that the user receives tactile signals at different spots according to the emotional expressions of interlocutors.…”
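As a rough illustration of that emotion-to-vibrotactile mapping, the C++ sketch below routes a recognized emotion label to one of six belt motors. The emotion set, pin numbers, and motorFor() helper are hypothetical stand-ins; the cited work does not publish its firmware, so this only sketches the mapping idea.

#include <array>
#include <cstdio>

// Hypothetical labels; the cited work uses six basic emotions.
enum class Emotion { Happy, Sad, Angry, Fearful, Surprised, Disgusted, Count };

// One vibration unit per emotion, identified here by an assumed output pin.
constexpr std::array<int, static_cast<int>(Emotion::Count)> kMotorPin{
    4, 5, 6, 7, 8, 9};

// Map a classifier's output label to the belt motor that should vibrate.
int motorFor(Emotion e) { return kMotorPin[static_cast<int>(e)]; }

int main() {
  Emotion detected = Emotion::Surprised; // e.g. from a facial-expression model
  std::printf("activate vibration unit on pin %d\n", motorFor(detected));
  return 0;
}

A fixed one-to-one lookup table like this keeps the tactile encoding easy to learn: each belt position always signals the same emotion.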
Recent advances in assistive device technology represent a great opportunity for improving the quality of life of people with moderate to severe visual impairment. However, it is still unclear what the precise daily difficulties, needs, and expectations regarding smart glasses technology are for visually impaired individuals. To this aim, we conducted a survey based on three questionnaires to provide qualitative and quantitative insights into those questions across five groups suffering from various visual pathologies ($N = 50$). The results clearly showed the importance of developing tailored solutions to fulfill the heterogeneous daily difficulties and needs identified across pathologies. Overall, the groups shared similar expectations regarding assistive smart glasses functionalities to improve social interactions.
Work integrating conversations around AI and Disability is vital and valued, particularly when done through a lens of fairness. Yet at the same time, analysing the ethical implications of AI for disabled people solely through the lens of a singular idea of "fairness" risks reinforcing existing power dynamics, either by reinforcing the position of existing medical gatekeepers or by promoting tools and techniques that benefit otherwise-privileged disabled people while harming those who are rendered outliers in multiple ways. In this paper we present two case studies from within computer vision - a subdiscipline of AI focused on training algorithms that can "see" - of technologies putatively intended to help disabled people but which, through failures to consider structural injustices in their design, are likely to result in harms not addressed by a "fairness" framing of ethics. Drawing on disability studies and critical data science, we call on researchers in AI ethics and disability to move beyond simplistic notions of fairness and towards notions of justice.