Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (ICMI 2015)
DOI: 10.1145/2818346.2830598

The Grenoble System for the Social Touch Challenge at ICMI 2015

Abstract: New technologies, and especially robotics, are moving towards more natural user interfaces. Work has been done on different modalities of interaction such as sight (visual computing) and audio (speech and audio recognition), but other modalities remain less researched. The touch modality is one of the least studied in HRI, yet it could be valuable for naturalistic interaction. However, touch signals can vary in semantics. It is therefore necessary to be able to recognize touch gestures in …

Cited by 20 publications (38 citation statements). References: 20 publications.
“…This indicated that the additional features and the use of more complex classification methods with hyperparameter optimization have improved the accuracy. Results reported in this paper fall within the range of 26-61 % accuracy that was reported for a data challenge using the CoST data set [3,12,18,36]. Our results are comparable to those reported in the work of Gaus et al and Ta et al who reported accuracies up to 59 and 61 %, respectively using random forest [12,36].…”
Section: Classification Results and Touch Gesture Confusion (supporting)
confidence: 90%
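The snippet above attributes the accuracy gain to richer features plus hyperparameter optimization of the classifier. As a purely illustrative sketch, not the pipeline of any cited paper, a grid search over random-forest hyperparameters on pre-extracted gesture features could look as follows; the feature dimensionality, class count, and parameter grid are assumptions.

```python
# Illustrative sketch only: hyperparameter-optimized random forest for
# touch-gesture classification. X and y are synthetic placeholders, not CoST data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((280, 54))          # assumed: 54 hand-crafted features per gesture
y = np.repeat(np.arange(14), 20)   # assumed: 14 gesture classes, 20 samples each

param_grid = {                     # assumed search space, not taken from the papers
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 10, 20],
    "max_features": ["sqrt", 0.5],
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```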
“…Results reported in this paper fall within the range of 26-61 % accuracy that was reported for a data challenge using the CoST data set [3,12,18,36]. Our results are comparable to those reported in the work of Gaus et al and Ta et al who reported accuracies up to 59 and 61 %, respectively using random forest [12,36]. However it should be noted that the data challenge contained a subset of CoST (i.e., gentle and normal variants) and that the train and test data division was different from the leave-one-subject-out cross-validation results reported in this paper [24].…”
Section: Classification Results and Touch Gesture Confusion (supporting)
confidence: 75%
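The contrast drawn here is between the challenge's fixed train/test split and leave-one-subject-out cross-validation. Below is a minimal sketch of that evaluation protocol with scikit-learn, assuming pre-extracted feature vectors and a subject ID per sample; all shapes and counts are placeholders, not the CoST data itself.

```python
# Illustrative sketch of leave-one-subject-out evaluation; data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)
X = rng.random((310, 54))             # assumed per-gesture feature vectors
y = rng.integers(0, 14, 310)          # assumed gesture labels
subjects = rng.integers(0, 31, 310)   # assumed subject ID per sample

logo = LeaveOneGroupOut()             # each fold holds out one subject entirely
scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                         X, y, groups=subjects, cv=logo, n_jobs=-1)
print(f"mean accuracy across held-out subjects: {scores.mean():.3f}")
```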
“…Ta et al [108] random forest (a): 61.3%
Ta et al [108] random forest (b): 60.8%
Ta et al [108] SVM (b): 60.5%
Ta et al [108] SVM (a): 59.9%
Gaus et al [41] random forest: 58.7%
Gaus et al [41] multiboost: 58.2%
Hughes et al [53] logistic regression: 47.2%…”
Section: Paper (mentioning)
confidence: 99%
“…This increased the difficulty of automatic segmentation based on pressure differences over time. Ta et al explored additional techniques for automatic segmentation to further reduce the number of excess frames [108]. However, these methods for automatic segmentation did not improve classification.…”
Section: Data Pre-processing (mentioning)
confidence: 99%
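The segmentation discussed here operates on pressure differences over time; one simple variant is to keep only the frames whose total sensor pressure rises above a baseline-relative threshold. The sketch below is illustrative only: the grid size, threshold, and function name are assumptions, not the methods evaluated in [108].

```python
# Illustrative sketch: trim excess frames around a touch gesture by
# thresholding summed sensor pressure per frame. Shapes and threshold are assumed.
import numpy as np

def segment_gesture(frames: np.ndarray, rel_threshold: float = 0.1) -> np.ndarray:
    """frames: (n_frames, rows, cols) pressure grid; returns the active span."""
    pressure = frames.reshape(len(frames), -1).sum(axis=1)   # total pressure per frame
    baseline = pressure.min()
    active = pressure > baseline + rel_threshold * (pressure.max() - baseline)
    if not active.any():                                      # nothing above threshold
        return frames
    idx = np.flatnonzero(active)
    return frames[idx[0]: idx[-1] + 1]                        # first..last active frame

# Toy usage: 100 frames of an assumed 8x8 grid with a burst of pressure in the middle.
demo = np.zeros((100, 8, 8))
demo[40:60] = 1.0
print(segment_gesture(demo).shape)   # (20, 8, 8)
```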