2021
DOI: 10.1109/tsmc.2019.2957347
DeSIRe: Deep Signer-Invariant Representations for Sign Language Recognition

Cited by 10 publications (9 citation statements)
References 36 publications
“…Without visual information, there is a tendency to miss key cues and make an incorrect judgment. Therefore, acting like a visual chatbot, ViLT considers nonverbal information, such as facial expressions, gestures, and actions [80], [81], to better understand worker behavior and generate more appropriate responses.…”
Section: Multimodal Multitask Foundation Model
confidence: 99%
“…Deep learning (DL), a subarea of ML, allows studying the underlying resources in data from various processing layers using neural networks, similar to the human brain (Goodfellow et al [36]). As of 2010, DL has attracted immense attention in many fields, especially image recognition and speech recognition (Ferreira et al [37], Schmidhuber [38]).…”
Section: Artificial Intelligence and Important Subareas
confidence: 99%
“…In recent years, due to the rapid development of RNN in natural language processing (NLP), many researchers attempt to apply RNN in SLR [26], [27], [44]. To model the underlying sign language more precisely, different types of encoder-decoder networks, such as the Transformer model, are also employed [45]- [49]. In addition, many new ideas are introduced in SLR.…”
Section: B The Sequential Module
confidence: 99%
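The Transformer-style encoder-decoder models cited above build contextualised frame representations through self-attention over the video sequence. A minimal single-head sketch of that core operation, with toy shapes and random weights (all variable names and dimensions here are illustrative assumptions, not taken from any cited model), might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (T, d) sequence of per-frame features (e.g. pose/hand keypoints).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (T, T) frame-to-frame affinities
    return softmax(scores) @ V               # each frame mixes in its context

rng = np.random.default_rng(0)
T, d, n_glosses = 16, 8, 5                   # 16 frames, 8-dim features, 5 classes
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

H = self_attention(X, Wq, Wk, Wv)            # (T, d) contextualised frames
logits = H.mean(axis=0) @ rng.normal(size=(d, n_glosses))  # pool + classify
print(logits.shape)                          # (5,)
```

A full SLR model would stack several such layers with positional encodings and feed-forward blocks, and a sequence-to-sequence variant would add a decoder attending over these encoder states.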