2022
DOI: 10.48084/etasr.4913
Fusion Machine Learning Strategies for Multi-modal Sensor-based Hand Gesture Recognition

Abstract: Hand gesture recognition has attracted the attention of many scientists because of its high applicability in fields such as sign language expression and human-machine interaction. Many approaches have been deployed to detect and recognize hand gestures, such as wearable devices, image information, or a combination of sensors and computer vision. However, the method of using wearable sensors achieves much higher accuracy and is less affected by occlusion, lighting conditions, and complex backgrounds. Existing so…

Cited by 5 publications (2 citation statements)
References 28 publications
“…In [10], a plug-and-play module, called cross-fusion, was added to the YOLOv5 model to combine the features of multiple convolutional layers, but its improved performance came at the cost of increased computational complexity. In [11], [12], a multimodal sensor fusion technique was proposed, which presented an enhanced object detection rate for autonomous vehicles. However, the integration of four different sensors and the simultaneous processing of their outputs in real time proves to be cumbersome and expensive compared to other methods.…”
Section: Related Work
Confidence: 99%
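The cross-fusion idea quoted above can be sketched as follows. This is an illustrative mock-up, not the actual module from [10]: feature maps from several convolutional stages (names and shapes hypothetical) are upsampled to a common spatial resolution and concatenated along the channel axis.

```python
import numpy as np

def nearest_upsample(feat, out_h, out_w):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    c, h, w = feat.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return feat[:, rows][:, :, cols]

def cross_fuse(features):
    """Fuse multi-scale conv features: upsample every map to the
    largest spatial size present, then concatenate along channels."""
    out_h = max(f.shape[1] for f in features)
    out_w = max(f.shape[2] for f in features)
    return np.concatenate(
        [nearest_upsample(f, out_h, out_w) for f in features], axis=0)

# Hypothetical feature maps from three conv stages at different scales.
p3 = np.random.rand(256, 8, 8)
p4 = np.random.rand(512, 4, 4)
p5 = np.random.rand(1024, 2, 2)
fused = cross_fuse([p3, p4, p5])
print(fused.shape)  # (1792, 8, 8)
```

The extra channels produced by concatenation illustrate the computational-cost trade-off the citing authors point out: downstream layers must now process a much wider tensor.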
“…Rather than complicating the design of Deep Neural Networks (DNN) [9][10][11][12][18][19], a separate filter was developed to be placed between the camera and the CNN module. This filter mechanism enriches the image captured by the camera and feeds it to CNN models, as shown in Figure 2.…”
Section: B. Riod Architecture
Confidence: 99%
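A minimal sketch of such a pre-CNN filter stage follows. The actual enrichment used in the cited work is not specified here, so global histogram equalization stands in as an assumed example of a contrast-enriching operation applied to each frame before it reaches the CNN.

```python
import numpy as np

def enrich(image):
    """Hypothetical pre-CNN filter: global histogram equalization
    to enrich contrast in an 8-bit grayscale frame."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    # Normalise the cumulative distribution to [0, 1] and build a
    # lookup table that stretches the intensity range to 0..255.
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[image]

# Simulated low-contrast camera frame (values clustered around 100).
frame = np.clip(np.random.normal(100, 10, (32, 32)), 0, 255).astype(np.uint8)
enhanced = enrich(frame)
```

Because the filter sits outside the network, the CNN itself stays unchanged, which matches the quoted design goal of not complicating the DNN architecture.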