Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers (UbiComp/ISWC '17 Adjunct)
DOI: 10.1145/3123024.3125616

Multimodal data collection framework for mental stress monitoring

Cited by 12 publications (4 citation statements)
References 10 publications
“…Applications have been seen in audio-visual speech recognition [52], image captioning [63], machine translation [34], sentiment analysis [55] and affect recognition [30]. In the space of ubiquitous computing, example applications include human activity recognition [1], sleep detection [12] and emotion recognition [36]. Many recognition tasks were previously performed primarily with unimodal learning; with the availability of low-energy sensors, many such tasks are now being explored using multimodal learning.…”
Section: Related Work
confidence: 99%
“…To define σ, it is also necessary to determine for which value of the difference HR − HR₀ the subject can be considered sufficiently stressed or scared to justify a considerable decrease of the cobot speed; we refer to this value as HR. Experiments to evaluate the range of HR variability due to stress and fear were conducted with human participants watching scary movies [30] or pictures [31], executing cognitive tasks [32], and interacting with robots [33][34][35]. The study of Weistroffer et al. [34], which examined the acceptability to human participants of the presence of robots in assembly lines, is the closest to our work.…”
Section: HR-based Speed Modulation
confidence: 99%
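The excerpt above describes slowing a cobot as the wearer's heart rate rises above a resting baseline HR₀, with σ controlling how sharply speed falls off. A minimal sketch of one such modulation rule, assuming (hypothetically) a Gaussian-style decay and made-up speed bounds `v_max`/`v_min` not taken from the cited paper:

```python
import math

def speed_scale(hr: float, hr0: float, sigma: float,
                v_max: float = 1.0, v_min: float = 0.2) -> float:
    """Map the HR deviation (hr - hr0) to a cobot speed in [v_min, v_max].

    Hypothetical illustration: speed decays from v_max toward v_min as
    heart rate rises above the resting baseline hr0, with sigma setting
    how quickly the slowdown kicks in.
    """
    delta = max(hr - hr0, 0.0)  # only slow down for *elevated* heart rate
    scale = math.exp(-(delta ** 2) / (2 * sigma ** 2))
    return v_min + (v_max - v_min) * scale
```

At the resting baseline the cobot runs at full speed, and the speed decreases monotonically as HR − HR₀ grows, never dropping below `v_min`; the actual shape and parameters of the mapping used in the cited work may differ.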
“…Using wearable devices in AfC is a case that many researchers have become interested in recently, owing to the many possible applications, such as ambient assisted living [12] or social analysis of a group of people [13]. Work in this area ranges from validation between such devices and medical-quality platforms [14,15,16], through emotion detection and recognition [12,14,17,18,19,20,21], to creating new apparatuses that enable unobtrusive data acquisition [16,22]; the last approach is an answer to the remarks presented in [8]. It is worth mentioning that, in terms of AfG, sensors incorporated into gloves and shoes or gaming equipment (such as the mouse or game pad) are much more appropriate for active gameplay.…”
Section: Introduction
confidence: 99%