2018
DOI: 10.1371/journal.pone.0194737

Conveying facial expressions to blind and visually impaired persons through a wearable vibrotactile device

Abstract: In face-to-face social interactions, blind and visually impaired persons (VIPs) lack access to nonverbal cues like facial expressions, body posture, and gestures, which may lead to impaired interpersonal communication. In this study, a wearable sensory substitution device (SSD) consisting of a head-mounted camera and a haptic belt was evaluated to determine whether vibrotactile cues around the waist could be used to convey facial expressions to users and whether such a device is desired by VIPs for use in daily…

Cited by 33 publications (19 citation statements); references 35 publications.
“…Additionally, various data mining and machine learning techniques could be applied to set up models for an individual's emotional profile based on sensor-based physiological and behavioral recordings. This could further be transferred to various positive computing use-cases 88 , such as helping children with autism in their social communication 89,90 , helping people who are blind to read facial expressions and get the emotion information of their peers 91 , finding opportune moments for conversational user interactions 92,93 , assisting social anxiety disorder patients to overcome their conditions 94 , allowing robots to interact more intelligently with people 95,96 , and monitoring signs of frustration and emotional saturation that affect attention while driving, to enhance driver safety 97,98 .…”
Section: Usage Notes (mentioning)
confidence: 99%
“…Many assistive technologies for blind people have traditionally focused on mobility, navigation, and object recognition, and more recently on social interaction as well [24], [25]. An increasing number of studies have explored assisting blind people in social situations with smart technologies, such as identifying the faces and facial expressions of sighted counterparts [26], [7]-[28]. A facial recognition system can help blind people identify colleagues in group meetings [27].…”
Section: B. Social Signal Perception and Technology (mentioning)
confidence: 99%
“…In the HFD system, 48 vibration motors are mounted on the back of a chair to best map facial movements to corresponding vibration cues. Buimer et al. [26] introduced a sensory substitution device (SSD) to help blind people determine the facial expressions of their interaction partners. The SSD classifies faces into the six universal facial expressions [30] and conveys the recognized emotion as vibrotactile stimuli through a belt.…”
Section: Instead Of Delivering Information Through Sense Of Hearing (mentioning)
confidence: 99%
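As a rough illustration of the belt-based mapping described in the excerpt above, the Python sketch below shows one way an emotion label produced by an expression classifier could be routed to a vibration motor on a waist-worn haptic belt. The EMOTION_TO_MOTOR layout, the confidence threshold, and the belt.send_pulse() driver call are illustrative assumptions for this sketch, not details taken from Buimer et al. [26].

```python
# Minimal sketch (not the authors' implementation): map a classified facial
# expression to one vibration motor on a waist-worn haptic belt.
# The emotion-to-motor layout, threshold, and send_pulse() driver call are
# illustrative assumptions, not details from the cited paper.

# Six universal facial expressions assigned to distinct belt positions,
# indexed clockwise around the waist (assumed layout).
EMOTION_TO_MOTOR = {
    "happiness": 0,
    "sadness": 1,
    "anger": 2,
    "fear": 3,
    "surprise": 4,
    "disgust": 5,
}

def convey_expression(emotion: str, confidence: float, belt) -> None:
    """Trigger the belt motor associated with the recognized emotion.

    `belt` is a hypothetical driver object exposing
    send_pulse(motor_index, intensity, duration_ms); low-confidence
    classifications are ignored so the user is not cued on guesses.
    """
    if confidence < 0.5:   # skip uncertain classifications (assumed cutoff)
        return
    motor = EMOTION_TO_MOTOR.get(emotion)
    if motor is None:      # unknown label: no tactile cue
        return
    belt.send_pulse(motor_index=motor, intensity=confidence, duration_ms=300)
```

In this sketch, one motor per emotion keeps the tactile vocabulary small; how many motors are used and how intensity or duration encode classifier confidence are design choices that would need to be validated with users.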
“…Use Case: Emotions. Persons with visual impairments often have difficulty perceiving nonverbal conversational cues, such as facial expressions [2], making social interactions challenging. To support a user during a conversation, an assistive system must first extract and process these cues and then convey the information to the user in an adapted format.…”
Section: Related Work (mentioning)
confidence: 99%
“…A neural network can provide more robustness to image properties or fuse multiple modalities [4]. For presenting the information, several researchers have proposed solutions for people with VI using audio [1] or tactile [2] feedback. Zuniga et al. proposed a system providing visual feedback [14].…”
Section: Related Work (mentioning)
confidence: 99%