2022
DOI: 10.1038/s41467-022-32231-1

Pushing the limits of remote RF sensing by reading lips under the face mask

Abstract: The problem of lip reading has become an important research challenge in recent years. The goal is to recognise speech from lip movements. Most of the lip-reading technologies developed so far are camera-based and require video recording of the target. However, these technologies have well-known limitations of occlusion and ambient lighting, as well as serious privacy concerns. Furthermore, vision-based technologies are not useful for multi-modal hearing aids in the coronavirus (COVID-19) environment, where face m…

Cited by 26 publications (19 citation statements)
References 24 publications
“…The UWB work demonstrated lip reading of the vowels A, E, I, O and U in a static scenario, with a face mask. Its result of 95% confirms that mouth motion carries an informative signal for UWB sensing 7 . To expand on that work and explore more possibilities, we added words and sentences to the data collection with respect to this reference.…”
Section: Literature Survey of Radar-Enabled Speech Recognition
confidence: 60%
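For orientation, the sketch below (PyTorch) shows one plausible shape of such a vowel classifier: a small CNN over UWB radar spectrograms labelled A, E, I, O, U. The input size, network layout, and stand-in random data are illustrative assumptions only and are not taken from the cited UWB work.

import torch
import torch.nn as nn

VOWELS = ["A", "E", "I", "O", "U"]  # the five classes mentioned in the citation

class VowelCNN(nn.Module):
    """Small CNN over single-channel radar spectrograms (assumed 64x64)."""
    def __init__(self, n_classes: int = len(VOWELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Stand-in batch: 8 random 64x64 "spectrograms" in place of real radar data.
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, len(VOWELS), (8,))

model = VowelCNN()
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()  # one illustrative backward pass of a training step
print(f"dummy loss: {loss.item():.3f}")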
“…Gesture training is an important part of the GMI system; in this scheme it is implemented using the VGG16 convolutional neural network, [45,46] and three synthetic image datasets of the alphabet, together with some homemade data based on ASL, are used as the training samples. [47][48][49] The detailed gesture-training framework based on the VGG16 network can be seen in Section S1 (Supporting Information).…”
Section: Gesture Training Based on VGG16 Network
confidence: 99%
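The quoted scheme fine-tunes VGG16 on alphabet gesture images; the sketch below shows one plausible way to do that with torchvision, assuming 26 ASL letter classes, a frozen convolutional backbone, and a replaced output layer. The class count, hyperparameters, and stand-in batch are illustrative assumptions, not the cited configuration.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 26  # assumed: one class per ASL alphabet letter

# Load ImageNet-pretrained VGG16 and adapt it for gesture classification.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # freeze convolutional backbone
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # replace the output layer

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative step on a stand-in batch of 224x224 RGB gesture images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy loss: {loss.item():.3f}")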
“…Typically, 144 turbo codes are constructed by parallel concatenation of two recursive convolutional encoders separated by an interleaver 145 . The task is then to design the code polynomials for the individual encoders and to choose a suitable interleaver.…”
Section: Channel Coding Over In-Vivo
confidence: 99%
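To make that construction concrete, here is a minimal sketch of a rate-1/3 turbo encoder: the systematic bits plus parity streams from two identical recursive systematic convolutional (RSC) encoders, the second fed through an interleaver. The (7,5) octal generator pair and the random interleaver are illustrative choices, not taken from the cited work.

import random

def rsc_parity(bits):
    """Parity stream of a two-register RSC encoder with generators (7,5) octal:
    feedback 1 + D + D^2, feedforward 1 + D^2."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        fb = b ^ s1 ^ s2        # recursive feedback term
        parity.append(fb ^ s2)  # feedforward output bit
        s1, s2 = fb, s1         # shift the registers
    return parity

def turbo_encode(bits, interleaver):
    """Parallel concatenation: systematic bits, parity of the original order,
    and parity of the interleaved order (overall rate 1/3)."""
    parity1 = rsc_parity(bits)
    parity2 = rsc_parity([bits[i] for i in interleaver])
    return bits, parity1, parity2

msg = [random.randint(0, 1) for _ in range(16)]
pi = random.sample(range(len(msg)), len(msg))  # random interleaver permutation
sys_bits, p1, p2 = turbo_encode(msg, pi)
print(sys_bits, p1, p2, sep="\n")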