2022
DOI: 10.1016/j.cmpb.2022.106904

Reinforcement learning coupled with finite element modeling for facial motion learning


Cited by 7 publications (5 citation statements)
References 29 publications
“…PSO-based decision-making employs a search space where each agent's position represents a potential solution, and movement within this space is influenced by personal bests and the swarm's global best. The convergence and stability of decision-making algorithms are crucial for the swarm's effectiveness [33]. Robustness is often enhanced by incorporating fault-tolerant mechanisms that allow the swarm to compensate for malfunctioning agents.…”
Section: P(t + 1) = (P(t) · ρ) + ΔP
Mentioning confidence: 99%
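The statement above summarizes the particle swarm optimization (PSO) update: each agent's position is a candidate solution that is pulled toward its own best position and toward the swarm's global best. The following is a minimal, self-contained sketch of that generic update rule, not the citing paper's exact formulation; the sphere objective, inertia weight w, and acceleration coefficients c1 and c2 are illustrative assumptions.

```python
import numpy as np

def sphere(x):
    # Illustrative objective: minimum at the origin.
    return np.sum(x ** 2)

def pso(obj=sphere, n_particles=20, dim=2, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, size=(n_particles, dim))  # candidate solutions
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                      # personal bests
    pbest_val = np.apply_along_axis(obj, 1, pos)
    gbest = pbest[np.argmin(pbest_val)].copy()              # swarm's global best

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update: inertia + pull toward personal best + pull toward global best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(obj, 1, pos)
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

if __name__ == "__main__":
    best_pos, best_val = pso()
    print("best position:", best_pos, "objective:", best_val)
```

Running the sketch drives the swarm toward the sphere minimum at the origin; the inertia term plays a role loosely analogous to the decay factor ρ in the section heading's update equation.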
“…The basis of a DRL network is made up of an agent and an environment, following an action-reward type of operation. The interaction begins in the environment with the sending of its state to the agent, which takes an action consistent with the state received, according to which it is subsequently rewarded or penalized by the environment [4,44,46-48]. RL is considered an autonomous learning technique that does not require labeled data but for which search and value function approximation are vital tools [4].…”
Section: Reinforcement Learning Concepts
Mentioning confidence: 99%
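The agent-environment loop described in this statement can be illustrated with a short, self-contained example. The corridor environment and tabular Q-learning agent below are hypothetical illustrations, not the cited paper's deep RL setup or its finite-element facial environment: the environment sends its state, the agent answers with an action, and the returned reward or penalty updates the agent's value estimates.

```python
import random

class CorridorEnv:
    """Hypothetical 1-D corridor: states 0..length-1; reaching the last state
    gives reward +1, every other step costs -0.01."""
    def __init__(self, length=5):
        self.length = length
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 = move left, 1 = move right
        move = 1 if action == 1 else -1
        self.state = max(0, min(self.length - 1, self.state + move))
        done = self.state == self.length - 1
        reward = 1.0 if done else -0.01
        return self.state, reward, done

def train(episodes=200, alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    # Tabular Q-learning: the environment's reward/penalty drives the update.
    random.seed(seed)
    env = CorridorEnv()
    q = [[0.0, 0.0] for _ in range(env.length)]
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection on the received state.
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = int(q[state][1] > q[state][0])
            next_state, reward, done = env.step(action)
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

if __name__ == "__main__":
    print(train())
```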
“…Current RL methods still present some challenges, namely the efficiency of the learning data and the ability to generalize to new scenarios [49]. Nevertheless, this group of techniques has been used with tremendous theoretical and practical achievements in diverse research topics such as robotics, gaming, biological systems, autonomous driving, computer vision, healthcare, and others [44,48,50-53].…”
Section: Reinforcement Learning Concepts
Mentioning confidence: 99%
“…Additionally, merging a 3D face model reconstructed from the patient with an animation of practicing rehabilitation exercises can generate a realistic animation. This may help patients learn facial motion and practice rehabilitation exercises more effectively [42]. The objective of the present study was to apply these state-of-the-art methods to reconstruct the 3D face shape models of facial palsy patients in natural and mimic postures from one single image.…”
Section: Introduction
Mentioning confidence: 99%