2024
DOI: 10.3390/s24051610

Exploring the Possibility of Photoplethysmography-Based Human Activity Recognition Using Convolutional Neural Networks

Semin Ryu, Suyeon Yun, Sunghan Lee, et al.

Abstract: Various sensing modalities, including external and internal sensors, have been employed in research on human activity recognition (HAR). Among these, internal sensors, particularly wearable technologies, hold significant promise due to their lightweight nature and simplicity. Recently, HAR techniques leveraging wearable biometric signals, such as electrocardiography (ECG) and photoplethysmography (PPG), have been proposed using publicly available datasets. However, to facilitate broader practical applications,…
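As an illustration of the approach summarized above, the sketch below shows a minimal 1D convolutional classifier over fixed-length PPG windows, in the spirit of the PPG-based HAR pipeline the paper explores. It is a hedged sketch only: the window length (512 samples), number of activity classes (5), layer widths, and the PPGActivityCNN name are assumptions made for illustration, not the architecture reported in the paper.

```python
# Minimal sketch of a 1D CNN for PPG-based HAR (illustrative assumptions only:
# single-channel PPG windows of 512 samples, 5 activity classes).
import torch
import torch.nn as nn

class PPGActivityCNN(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),   # local waveform features
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),  # longer-range rhythm features
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # collapse the time axis
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                                 # x: (batch, 1, window_len)
        z = self.features(x).squeeze(-1)                  # (batch, 32)
        return self.classifier(z)                         # activity logits

# Example: classify a batch of 8 PPG windows of 512 samples each.
model = PPGActivityCNN()
logits = model(torch.randn(8, 1, 512))
print(logits.shape)  # torch.Size([8, 5])
```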

Cited by 1 publication (2 citation statements)
References: 54 publications
“…This study reconstructs the entire fusion network using the novel lightweight Ghost module to reduce model parameters and decrease computational requirements, making it more suitable for deployment on mobile devices, significantly enhancing the model's usability and portability. In the formula, γ′ ∈ R^(h′×ω′×m) represents the output feature map, b represents the bias term [17], * signifies the convolution operation, and subsequently, γ′ undergoes an inexpensive mapping. As shown in Formula (2), y′_i ∈ Y′ and φ_(i,j) denote the j-th linear transformation of the source feature i.…”
Section: Backbone Section Introduces the Lightweight GhostNet Module
confidence: 99%
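The quoted statement describes the Ghost module's two-step construction: an ordinary convolution produces a small set of intrinsic feature maps (γ′), and cheap linear transformations φ_(i,j) (commonly depthwise convolutions) generate the remaining "ghost" maps from them. The sketch below is a minimal rendering of that idea, assuming a PyTorch implementation with a ratio of 2 and a 3×3 depthwise kernel for the cheap mapping; the GhostModule class and its hyperparameters are illustrative assumptions, not the citing paper's exact code.

```python
# Minimal sketch of a Ghost-style module (assumed rendering, not the citing
# paper's implementation): primary convolution -> cheap depthwise mapping.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        intrinsic = out_ch // ratio                # intrinsic maps (gamma')
        ghost = out_ch - intrinsic                 # maps produced by the cheap phi_{i,j}
        self.primary = nn.Sequential(              # ordinary convolution: y' = x * f + b
            nn.Conv2d(in_ch, intrinsic, kernel_size=1, bias=True),
            nn.BatchNorm2d(intrinsic),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(                # depthwise conv as the inexpensive mapping
            nn.Conv2d(intrinsic, ghost, kernel_size=3, padding=1,
                      groups=intrinsic, bias=True),
            nn.BatchNorm2d(ghost),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)                            # intrinsic feature maps
        return torch.cat([y, self.cheap(y)], dim=1)    # concatenate ghosts -> out_ch maps

# Example: expand 16 channels to 32 on a 56x56 feature map.
out = GhostModule(16, 32)(torch.randn(1, 16, 56, 56))
print(out.shape)  # torch.Size([1, 32, 56, 56])
```

Because the depthwise mapping acts on each intrinsic map independently, it adds far fewer parameters than a second full convolution, which is what makes the module attractive for the mobile deployment scenario the citing work targets.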