2021
DOI: 10.3389/fcomp.2021.792065
Opportunity++: A Multimodal Dataset for Video- and Wearable, Object and Ambient Sensors-Based Human Activity Recognition

Cited by 14 publications (9 citation statements)
References 33 publications
“…However, datasets shown in Table 1 mostly focus on locomotion activities and activities of daily living. Only a few, such as [26,27,35,51], include sporadic, transition or complex activities, and many datasets that do include sports [16,33] aggregate an entire sport into a single activity. Published sports studies tend to not release their datasets publicly or only upon request, with Trost et al [17] and Bock et al [51] as the only exceptions, as shown in Tables 2 and 3.…”
Section: Discussion (mentioning)
confidence: 99%
“…The software Matlab, hardware Intel Core i7 at 2.4 GHz along with 24 GB of RAM are utilized to assess the proposed system. Two indoor activities-based datasets known as HWU-USP [33] and Opportunity++ [34] are utilized for system performance measurement. A 10-fold cross-validation technique has been used over the dataset for training purposes.…”
Section: Experimental Setup and Evaluation (mentioning)
confidence: 99%
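The 10-fold cross-validation protocol mentioned in this citation statement can be illustrated with a minimal sketch. The feature matrix, classifier, and metric below are assumptions for illustration only; they are not the cited authors' Matlab pipeline or the Opportunity++ baseline.

```python
# Minimal sketch of 10-fold cross-validation for sensor-based HAR.
# X and y stand in for windowed features and activity labels; the
# classifier and macro-F1 metric are illustrative choices, not the
# cited implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold


def evaluate_10_fold(X: np.ndarray, y: np.ndarray) -> float:
    """Return the mean macro-F1 score over 10 stratified folds."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        scores.append(f1_score(y[test_idx], pred, average="macro"))
    return float(np.mean(scores))


if __name__ == "__main__":
    # Synthetic stand-in for windowed features from HWU-USP / Opportunity++.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 64))
    y = rng.integers(0, 5, size=500)
    print(f"mean macro-F1: {evaluate_10_fold(X, y):.3f}")
```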
“…Whereas the ambient sensors, including 13 switches and 8 3D acceleration sensors, were attached to objects like milk, spoons, water bottles, glasses, drawers, and doors. Videos were recorded at 640 × 480 pixels and 10 fps [34]. Figure 11 presents a few sample images captured during video recordings where a subject is (a) cleaning the table; (b) opening the door; (c) opening the drawer; (d) closing the door; (e) opening the door; and (f) closing the dishwasher.…”
Section: Experimental Setup and Evaluation (mentioning)
confidence: 99%
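Combining the 10 fps video stream described above with higher-rate wearable and object sensors requires temporal alignment. The sketch below shows one way to pick, for each video frame, the nearest sensor reading by timestamp; the 30 Hz sensor rate and array layout are assumptions, not the Opportunity++ file format.

```python
# Sketch of timestamp-based alignment between 10 fps video frames and a
# higher-rate sensor stream. Rates and shapes are assumed for illustration;
# they do not reflect the actual Opportunity++ recording specification.
import numpy as np


def nearest_sensor_sample(frame_times: np.ndarray,
                          sensor_times: np.ndarray,
                          sensor_values: np.ndarray) -> np.ndarray:
    """For each video-frame timestamp, return the closest sensor reading."""
    idx = np.searchsorted(sensor_times, frame_times)
    idx = np.clip(idx, 1, len(sensor_times) - 1)
    left, right = sensor_times[idx - 1], sensor_times[idx]
    # Step back one index where the left neighbour is closer in time.
    idx -= (frame_times - left) < (right - frame_times)
    return sensor_values[idx]


if __name__ == "__main__":
    frame_times = np.arange(0.0, 5.0, 1 / 10)    # 10 fps video frames
    sensor_times = np.arange(0.0, 5.0, 1 / 30)   # 30 Hz sensor stream (assumed)
    sensor_values = np.random.default_rng(0).normal(size=(len(sensor_times), 3))
    aligned = nearest_sensor_sample(frame_times, sensor_times, sensor_values)
    print(aligned.shape)  # one 3-axis reading per video frame
```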
“…A deep learning framework will help facilities to cope with high costs and nursing shortages via ADL recognition [20]. Multiple hyperparameters can be used for each deep learning model to adjust the ADL recognition [21]. Therefore, we have proposed a unique framework for the ADL recognition of elderly people at smart homes and facilities using IoT-based multisensory devices.…”
Section: Introduction (mentioning)
confidence: 99%
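The role of hyperparameters in a deep ADL-recognition model, as mentioned in this statement, can be made concrete with a small sketch. The 1D-CNN architecture and the specific values (window length, channel count, learning rate, dropout) are assumptions for illustration, not the framework proposed in the cited work.

```python
# Illustrative hyperparameter set for a small 1D-CNN ADL recognizer (PyTorch).
# Architecture and values are assumed; they are not the cited framework.
from dataclasses import dataclass

import torch
import torch.nn as nn


@dataclass
class HParams:
    window_len: int = 64        # samples per sliding window (assumed)
    n_channels: int = 6         # e.g. 3-axis accel + 3-axis gyro (assumed)
    n_classes: int = 5
    hidden: int = 64
    learning_rate: float = 1e-3
    dropout: float = 0.3


def build_model(hp: HParams) -> nn.Module:
    """Small 1D-CNN whose capacity is controlled by the hyperparameters."""
    return nn.Sequential(
        nn.Conv1d(hp.n_channels, hp.hidden, kernel_size=5, padding=2),
        nn.ReLU(),
        nn.Dropout(hp.dropout),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(hp.hidden, hp.n_classes),
    )


if __name__ == "__main__":
    hp = HParams()
    model = build_model(hp)
    optimizer = torch.optim.Adam(model.parameters(), lr=hp.learning_rate)
    x = torch.randn(8, hp.n_channels, hp.window_len)  # batch of sensor windows
    print(model(x).shape)  # torch.Size([8, 5])
```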