Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct 2016
DOI: 10.1145/2968219.2971459
Multimodal multisensor activity annotation tool

Cited by 14 publications (10 citation statements) · References 3 publications
“…Applications have been seen in audio-visual speech recognition [52], image captioning [63], machine translation [34], sentiment analysis [55] and affect recognition [30]. In the space of ubiquitous computing, example applications include human activity recognition [1], sleep detection [12] and emotion recognition [36]. Many recognition tasks were previously performed primarily with unimodal learning; with the availability of low-energy sensors, many such tasks have recently been explored using multimodal learning.…”
Section: Related Work (mentioning)
confidence: 99%
“…feature extraction). Indeed, Barz et al. [19] highlight that most data acquisition and annotation tools are limited to a particular sensor. This can be attributed to the fact that different kinds of sensors appear to require different techniques or feature sets.…”
Section: Related Work (mentioning)
confidence: 99%
“…The segmentation and annotation of time-series can be performed using graphical user interfaces (GUIs) [10,11,12,13,14,15,16,17,18,19]. Although some existing GUIs process similar data with similar goals, for example the annotation of human activities from videos and wearable sensors, they have been developed independently of each other [15,16,17,18,19]. This means the existing code is not re-used within the community, and thus similar functionalities are implemented in numerous ways, leading to duplicated work.…”
Section: Introduction (mentioning)
confidence: 99%
“…One of the reasons code is not re-used is that it is not publicly available [15,16,17,18]. According to a study conducted by Stodden et al. [21], the major reason for not publishing code is that researchers consider their code not cleaned up and undocumented.…”
Section: Introduction (mentioning)
confidence: 99%