2022
DOI: 10.1007/s11227-022-04454-y
Analysis of high-level dance movements under deep learning and internet of things


Cited by 9 publications (10 citation statements) | References 22 publications
“…Different from other published methods, their methods are composed only of fully trainable neural networks and do not rely on any traditional computer graphics methods (Fink et al, 2021). Wang and Tong (2022) further proposed a time-consistency method for dynamic pixel loss. Compared with the direct audio-to-image method, this cascading method avoids fitting the false correlation between audio-visual signals independent of speech content.…”
Section: Discussion (mentioning)
confidence: 99%
“…To avoid these pixel-jitter problems, they also emphasized the network's attention to audio-visual related areas, and proposed a new attention mechanism with a dynamically adjustable pixel-level loss. In addition, to generate clearer images with well-synchronized facial motion, they proposed a new regression-based discriminator structure, which considers both sequence-level and frame-level information (Wang and Tong, 2022). The above two scholars discussed their methods of generating dance movements from different angles.…”
Section: Discussion (mentioning)
confidence: 99%
“…After the implementation of the confidence fusion scheme, the final jumping action recognition result from figure skating videos is obtained. BRMFSJ [10], POSL [1], AHDM [16], ARMHJ [22] and HARM [13] are used for jumping action recognition, and then the true positive rate is calculated according to the action recognition results.…”
Section: Data Set (mentioning)
confidence: 99%
“…The features are input into the improved deep reinforcement learning network model, and the jump action recognition results are obtained. At the same time, the algorithms in BRMFSJ [10], POSL [1], AHDM [16], ARMHJ [22] and HARM [13] are applied to perform action recognition on the UCF 101 dataset. According to the recognition precision and recall of the different recognition algorithms, the PR curve is drawn, and then the area under the curve is calculated by the 11-point interpolation method to obtain the AP value.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%