2022
DOI: 10.1109/lra.2021.3135560

CoMet: Modeling Group Cohesion for Socially Compliant Robot Navigation in Crowded Scenes

Cited by 15 publications (14 citation statements)
References 33 publications
“…Much research has been conducted on identifying group features and group discovery [1]-[4]. With video as input, [2] uses YOLOv2 for object detection, converts the detected objects into feature vectors, obtains a feature matrix, and uses it as input for clustering, grouping nearby people at the spatial level.…”
Section: A. Human Group Learning
Mentioning confidence: 99%
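The grouping pipeline quoted above (detector output, per-person feature vectors, feature matrix, clustering) can be illustrated with a minimal sketch. The feature choice (bounding-box centroids in a metric ground-plane frame) and the DBSCAN parameters below are assumptions for illustration only, not details taken from [2].

# Minimal sketch: detections (e.g., from an object detector such as YOLOv2)
# are reduced to per-person feature vectors, stacked into a feature matrix,
# and clustered so that spatially close people fall into the same group.
import numpy as np
from sklearn.cluster import DBSCAN

def group_people(boxes, eps=1.5, min_samples=2):
    """Cluster detected people into spatial groups.

    boxes: list of (x_min, y_min, x_max, y_max) person detections,
           assumed to lie in a common metric ground-plane frame.
    Returns an array of group labels, with -1 marking ungrouped people.
    """
    # Feature vector per person: centroid of the bounding box.
    feats = np.array([[(x0 + x1) / 2.0, (y0 + y1) / 2.0]
                      for x0, y0, x1, y1 in boxes])
    if len(feats) == 0:
        return np.array([], dtype=int)
    # Density-based clustering groups nearby people; eps is the maximum
    # distance (here in metres) for two people to count as neighbours.
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)

labels = group_people([(0.0, 0.0, 0.5, 1.8), (0.6, 0.1, 1.1, 1.9), (5.0, 5.0, 5.5, 6.8)])
print(labels)  # e.g. [0 0 -1]: the first two people form a group, the third stands alone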
“…It allows the robot to anticipate potential collisions or other interactions and adjust its trajectory accordingly. However, these algorithms have limitations, such as needing to carry out expensive online computations or having reduced accuracy under certain conditions [1]. For example, when the crowd density is high, computing multiple features online takes too long and negatively impacts the performance of the navigation algorithm.…”
Section: Introduction
Mentioning confidence: 99%
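The anticipation step mentioned in this statement can be sketched as a simple check of predicted clearance between the robot's planned trajectory and short-horizon pedestrian predictions, with the speed scaled down when a collision looks likely. The horizon length, time step, and safety radius below are assumed values, not parameters from the cited algorithms.

# Illustrative sketch: compare the planned robot trajectory against
# predicted pedestrian trajectories and slow down as clearance shrinks.
import numpy as np

def min_separation(robot_traj, ped_trajs):
    """robot_traj: (T, 2) planned robot positions; ped_trajs: (N, T, 2)."""
    # Distance from the robot to every pedestrian at every future step.
    dists = np.linalg.norm(ped_trajs - robot_traj[None, :, :], axis=-1)
    return dists.min()

def adjust_speed(v_nominal, robot_traj, ped_trajs, safety_radius=0.8):
    """Scale the nominal speed down as the predicted clearance shrinks."""
    clearance = min_separation(robot_traj, ped_trajs)
    if clearance >= 2 * safety_radius:
        return v_nominal                      # free motion
    if clearance <= safety_radius:
        return 0.0                            # predicted collision: stop and replan
    # Linear slowdown between the two thresholds.
    return v_nominal * (clearance - safety_radius) / safety_radius

# Example: robot heading along +x while one pedestrian crosses its path.
T = 10
robot = np.stack([np.linspace(0, 3, T), np.zeros(T)], axis=1)
ped = np.stack([np.full(T, 1.5), np.linspace(2, -2, T)], axis=1)[None]
print(adjust_speed(1.0, robot, ped))  # 0.0: predicted clearance is below the safety radius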
“…For the navigation of social robots, parameters such as the trajectory, position, or speed of the people or of the robot itself were considered, but the emotions of multiple people were not taken into account [12, 56, 57, 58]. Some studies consider the influence of a robot within a group of people [13, 16, 59], but they do not detect group emotions, let alone the emotion of an environment. Very few studies propose methods for group emotion estimation.…”
Section: Related Work
Mentioning confidence: 99%
“…Developing systems with this perspective enables the robot to adapt to social groups of humans [12]. The detection of groups of people improves the navigation of a social robot in indoor and outdoor environments, and the detection of group emotions allows the robot to improve HRI by exhibiting acceptable social behaviour [13, 14, 15, 16], as well as by associating the group emotion with the scene in which the group is participating. Nevertheless, most existing studies on detecting group emotions rely on third-person cameras [17, 18, 19, 20, 21], whose complexity makes them unsuitable for social robots with egocentric vision, given their limited sensory capacity.…”
Section: Introduction
Mentioning confidence: 99%
“…According to [4], as many as 215,660 people received warnings for violating health protocols at tourist attractions during May 2021. Therefore, to help enforce health protocols, [5] developed COVID-Robot, which can detect inter-person distance in a crowd as well as body temperature. However, the COVID-Robot's output cannot yet directly warn people of health-protocol violations.…”
Section: Introduction
Mentioning confidence: 99%