2015 IEEE International Conference on Evolving and Adaptive Intelligent Systems (EAIS)
DOI: 10.1109/eais.2015.7368798
Acoustic sensor based activity recognition using ensemble of one-class classifiers

Cited by 7 publications (4 citation statements). References 21 publications.
“…Then they forwarded those features to a vanilla CNN for the classification of walking, sitting, falling, and standing. Tripathi et al. [41] used acoustic sensors to detect the locomotion activities of humans at bus stops and parks and generated perceptual features. They used an ensemble of one-class classifiers based on fuzzy rules.…”

Section: RS-HLAR Using Proximity Sensors

confidence: 99%
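The ensemble-of-one-class-classifiers idea mentioned above can be illustrated with a minimal sketch: train one one-class model per activity and classify a new acoustic feature vector by the model that accepts it most strongly. The paper uses fuzzy decision rules; the centroid-plus-radius models, feature values, and class names below are simplified stand-ins, not the authors' implementation.

```python
# Hedged sketch: an ensemble of one-class classifiers for activity
# recognition. Each "classifier" is a simple centroid + radius model
# per activity (a stand-in for the paper's fuzzy decision rules).
from math import dist

class OneClassCentroid:
    """Accepts a sample if it lies within `scale` times the mean
    training distance from the class centroid."""
    def __init__(self, scale=2.0):
        self.scale = scale

    def fit(self, samples):
        n, d = len(samples), len(samples[0])
        self.centroid = [sum(s[i] for s in samples) / n for i in range(d)]
        mean_dist = sum(dist(s, self.centroid) for s in samples) / n
        self.radius = self.scale * max(mean_dist, 1e-9)
        return self

    def score(self, x):
        # Higher = more typical of this class: 1 at the centroid,
        # 0 at the acceptance boundary, negative outside it.
        return 1.0 - dist(x, self.centroid) / self.radius

class OneClassEnsemble:
    def fit(self, data_by_activity):
        self.models = {a: OneClassCentroid().fit(xs)
                       for a, xs in data_by_activity.items()}
        return self

    def predict(self, x):
        # Decision rule: pick the activity whose one-class model
        # scores the sample highest; report "unknown" if every
        # model rejects it (all scores negative).
        scores = {a: m.score(x) for a, m in self.models.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

# Toy usage with made-up 2-D acoustic features
train = {
    "walking": [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)],
    "sitting": [(5.0, 5.0), (5.1, 4.9), (4.9, 5.2)],
}
ens = OneClassEnsemble().fit(train)
print(ens.predict((1.05, 1.0)))   # near the walking centroid
print(ens.predict((20.0, 20.0)))  # rejected by both models -> "unknown"
```

One appeal of the one-class formulation is that samples from unseen activities can be rejected outright rather than forced into a known class.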
“…The evaluation showed the impact of decision-level fusion for human activity classification, as the authors achieved 98% accuracy with the average-of-probability fusion method. Tripathi et al. [53] investigated a fuzzy decision-rule algorithm that uses a simple combination rule for adaptive human activity identification. The authors formulated a new classifier for each batch of new activity details.…”

Section: Multiple Classifier Systems

confidence: 99%
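The average-of-probability fusion rule mentioned above is straightforward to sketch: each base classifier emits a probability distribution over the same classes, the distributions are averaged, and the fused label is the argmax. The classifier outputs and class names below are illustrative, not taken from the cited evaluation.

```python
# Hedged sketch of decision-level fusion by averaging class
# probabilities (the "average of probability" combination rule).
def fuse_average(prob_dicts):
    """Average per-classifier probability distributions over the
    same set of classes, then pick the argmax."""
    classes = prob_dicts[0].keys()
    fused = {c: sum(p[c] for p in prob_dicts) / len(prob_dicts)
             for c in classes}
    return max(fused, key=fused.get), fused

# Three base classifiers voting on the same audio frame
outputs = [
    {"walking": 0.6, "sitting": 0.3, "falling": 0.1},
    {"walking": 0.5, "sitting": 0.4, "falling": 0.1},
    {"walking": 0.4, "sitting": 0.5, "falling": 0.1},
]
label, fused = fuse_average(outputs)
print(label)  # "walking": fused walking = 0.5 vs sitting = 0.4
```

Averaging probabilities (rather than majority voting on hard labels) lets a confident minority classifier outvote two weakly confident ones, which is often why it performs well in decision-level fusion.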
“…Giannakopoulos and Siantikos [11] developed an activity recognition system for elderly monitoring that uses non-verbal information from the audio channel. Other research on acoustic-based activity recognition followed similar approaches [15,29]. This prior work, however, focuses on audio signals and sounds from sensors, and has not used textual content from speech to recognize activities.…”
Section: Related Work
confidence: 99%