2019
DOI: 10.1111/cgf.13831
Figure Skating Simulation from Video

Abstract: Figure skating is one of the most popular ice sports at the Winter Olympic Games. Skaters perform a variety of skating skills to express artistry on ice. Skating involves moving on ice while wearing skates with thin blades, so it takes considerable practice to skate without losing balance. Moreover, figure skating presents dynamic moves, such as jumps, artistically. Demonstrating figure skating skills is therefore even more difficult than basic skating, and professional skaters oft…

Cited by 9 publications (7 citation statements)
References 44 publications
“…Recently, deep reinforcement learning (DRL) has successfully demonstrated its capabilities in solving high-dimensional, continuous control problems, including human motion imitation [Bergamin et al. 2019; Merel et al. 2019; Peng et al. 2018, 2021, 2022; Won et al. 2020; Yu et al. 2019], motion control in complex environments [Clegg et al. 2018; Liu and Hodgins 2018; Winkler et al. 2022; Won et al. 2021; Yang et al. 2022], and non-human character control [Ishiwaka et al. 2022; Lee et al. 2022; Luo et al. 2020]. The control of musculoskeletal characters is no exception to these technological innovations; in particular, DRL-based controllers have improved significantly in robustness against external perturbation, computational efficiency at runtime, and the scope of reproducible motor skills.…”
Section: Related Work
confidence: 99%
“…Learning-based control. Reinforcement learning (RL) control of physically simulated characters has achieved strong performance on sophisticated motor skills such as walking, jumping, cartwheels, and skating [49, 38, 48, 80]. However, while controllers behave well in idealized simulated environments, they often struggle when transferred to the real world, exhibiting infeasible motor-control behaviors due to the difference between simulation and reality, which is often referred to as the reality gap.…”
Section: Related Work (Legged Robot Control)
confidence: 99%
“…On the other hand, physics-based motion trackers [42, 43] allow us to obtain natural motions in simulation, but their control design requires additional manual effort, such as feature selection and motion processing. The recent RL-based formulation [49] provides an automated pipeline for developing effective motion-imitation control policies from simple reward descriptions, and is capable of learning various motions on simulated characters [23, 72, 73, 49, 11, 38, 46, 48, 80, 44, 39], or even on a real quadrupedal robot [53] with manual motion retargeting. We adopt the concept of an imitation objective to obtain both physically correct motion and interactive control.…”
Section: B. Motion Imitation
confidence: 99%
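The imitation objective mentioned in the excerpt above is typically realized as a tracking reward that scores how closely the simulated character's pose and velocity match a reference motion at each timestep. A minimal sketch of such a reward is below; the weights and scale factors (`w_pose`, `w_vel`, `k_pose`, `k_vel`) are illustrative assumptions, not values from any of the cited papers.

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel,
                     w_pose=0.65, w_vel=0.35, k_pose=2.0, k_vel=0.1):
    """Tracking-style imitation reward: exponentiated negative squared
    error between the simulated and reference joint states. Returns a
    value in (0, 1], equal to 1 only on a perfect match."""
    pose_err = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    vel_err = np.sum((np.asarray(sim_vel) - np.asarray(ref_vel)) ** 2)
    r_pose = np.exp(-k_pose * pose_err)  # pose-matching term
    r_vel = np.exp(-k_vel * vel_err)     # velocity-matching term
    return w_pose * r_pose + w_vel * r_vel
```

Summing such rewards over an episode gives the "simple reward description" from which an RL algorithm can learn a control policy, with no hand-written controller required.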
“…Deep reinforcement learning (DRL) provides a general framework in which characters automatically learn a control policy from rewards, handling uncertainties and perturbations in the system effectively. Recent advancements in DRL algorithms have enabled researchers to create control policies for a variety of character motor skills [Liu and Hodgins 2018; Peng et al. 2018; Yu et al. 2019], including manipulation problems such as solving a Rubik's Cube [Akkaya et al. 2019] and opening doors [Rajeswaran et al. 2017]. However, due to the high sample complexity of DRL algorithms and the high computation time for non-rigid objects like fluids and cloths, it is most common to apply DRL algorithms to manipulation problems with rigid objects, for which stable and efficient simulation tools are available [Coumans and Bai 2016; Lee et al. 2018].…”
Section: Related Work
confidence: 99%
“…Rather than creating hand-written controllers for our characters, we draw upon the strengths of reinforcement learning algorithms to develop robust character control policies. Researchers have successfully applied reinforcement learning to create policies that control characters to perform a variety of motor skills, such as parkour [Peng et al. 2018], basketball dribbling [Liu and Hodgins 2018], ice skating [Yu et al. 2019], and dressing [Clegg et al. 2018].…”
Section: Introduction
confidence: 99%