2022
DOI: 10.3390/s22176517
Cross-Modal Reconstruction for Tactile Signal in Human–Robot Interaction

Abstract: A human can infer the magnitude of an interaction force solely from visual information because of prior knowledge in human–robot interaction (HRI). This paper proposes a method of reconstructing tactile information through cross-modal signal processing. In our method, visual information is added as an auxiliary source to tactile information, so that the receiver can determine the tactile interaction force from the visual information alone. In our method, we first process groups of p…

Cited by 2 publications (1 citation statement). References 50 publications.
“…Cross-modal research has attracted a lot of attention recently [35][36][37][38][39][40][41][42], especially some video-related work [43,44], which has achieved good results. Additionally, with the great success of contrastive learning in the field of unsupervised representation learning, more and more researchers are applying contrastive learning methods to cross-modal studies [45][46][47][48].…”
Section: Cross-modal Contrastive Learning
confidence: 99%