Motion, Interaction and Games 2020
DOI: 10.1145/3424636.3426894

Deep Integration of Physical Humanoid Control and Crowd Navigation

Abstract: Many multi-agent navigation approaches make use of simplified representations such as a disk. These simplifications allow for fast simulation of thousands of agents but limit the simulation accuracy and fidelity. In this paper, we propose a fully integrated physical character control and multi-agent navigation method. In place of sample complex online planning methods, we extend the use of recent deep reinforcement learning techniques. This extension improves on multi-agent navigation models and simulated humanoids…


Cited by 19 publications (13 citation statements)
References 46 publications
“…Haworth et al [HBM*20] introduce a method on the borderline between single-character and crowd animation. By employing a method based on the ideas of Hierarchical Reinforcement Learning, they train two policies that interact with one another.…”
Section: Crowd Animation
confidence: 99%
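The two-policy structure described above can be sketched minimally: a high-level navigation policy emits a short-range target, and a low-level locomotion policy tries to track it. This is an illustrative toy, not the paper's trained policies; the step bound and tracking gain are assumptions.

```python
import numpy as np

def high_level_policy(agent_pos, goal_pos):
    """Navigation policy: emit a bounded short-range target
    (e.g. a footstep location) stepping toward the global goal."""
    direction = goal_pos - agent_pos
    dist = np.linalg.norm(direction)
    step = min(dist, 0.5)  # assumed maximum step length
    return agent_pos + step * direction / (dist + 1e-8)

def low_level_policy(agent_pos, target):
    """Locomotion policy: move the (simplified) character toward
    the target set by the high-level policy, with imperfect tracking."""
    return agent_pos + 0.9 * (target - agent_pos)

agent, goal = np.array([0.0, 0.0]), np.array([4.0, 3.0])
for _ in range(20):
    target = high_level_policy(agent, goal)  # high level plans a waypoint
    agent = low_level_policy(agent, target)  # low level executes it
print(np.round(agent, 2))
```

In the actual method the two levels are learned jointly, so the high level adapts to what the low-level controller can physically execute; the hand-coded rules here only stand in for that interaction.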
“…Kostrikov [26] identified an implicit bias in algorithms based on the adversarial imitation learning framework and proposed a new algorithm called discriminator-actor-critic. Haworth [27] trained a low-level policy for walking behavior and a high-level policy for planning via hierarchical reinforcement learning [28]. Curriculum learning was first proposed by Bengio [29]; it is a training strategy [30] that trains a machine learning model on progressively harder data subsets, starting from the easiest, until the whole training dataset is used.…”
Section: Related Work
confidence: 99%
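The easy-to-hard schedule described in that excerpt can be sketched as a curriculum that sorts examples by a difficulty score and grows the training set stage by stage until it covers the full dataset. The function and its parameters are hypothetical, not taken from the cited works.

```python
def curriculum_stages(dataset, difficulty, n_stages=4):
    """Sort examples by a user-supplied difficulty score and return
    one growing training subset per stage; the last stage is the
    whole dataset."""
    ordered = sorted(dataset, key=difficulty)
    stages = []
    for s in range(1, n_stages + 1):
        cutoff = round(len(ordered) * s / n_stages)  # easiest fraction
        stages.append(ordered[:cutoff])
    return stages

# Toy data where the value itself is the difficulty score.
data = [5, 1, 4, 2, 3]
stages = curriculum_stages(data, difficulty=lambda x: x)
print(stages)  # each stage adds harder examples; last stage = full set
```

A real curriculum would define difficulty from the task (e.g. crowd density or goal distance) rather than from the raw value, but the schedule shape is the same.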
“…We solve the above problems from two aspects. First, based on the idea of hierarchical reinforcement learning [19], the task of moving the agent from the starting point to the goal is decomposed into a series of subtasks by extracting key navigation points [20], and each subtask is then planned in a divide-and-conquer manner. Determining the key navigation points is therefore very important.…”
Section: Extrinsic Reward
confidence: 99%
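The decomposition described in that excerpt can be sketched as splitting a start-to-goal task at intermediate key navigation points, yielding one subtask per leg. This is a hypothetical illustration of the divide-and-conquer idea; how the key points are chosen (the hard part the excerpt emphasizes) is left out.

```python
def decompose(start, goal, waypoints):
    """Split start->goal navigation into (from, to) subtasks at the
    given key navigation points; each leg is planned independently."""
    points = [start, *waypoints, goal]
    return list(zip(points, points[1:]))  # consecutive pairs = legs

subtasks = decompose((0, 0), (10, 0), waypoints=[(3, 2), (7, 2)])
print(subtasks)
# [((0, 0), (3, 2)), ((3, 2), (7, 2)), ((7, 2), (10, 0))]
```

Each subtask can then be handed to a local planner or low-level policy, which is what makes the divide-and-conquer framing useful for long-horizon navigation.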