2022
DOI: 10.48550/arxiv.2204.06949
Preprint

Federated Learning for Vision-based Obstacle Avoidance in the Internet of Robotic Things

Abstract: Deep learning methods have revolutionized mobile robotics, from advanced perception models for enhanced situational awareness to novel control approaches through reinforcement learning. This paper explores the potential of federated learning for distributed systems of mobile robots, enabling collaboration on the Internet of Robotic Things. To demonstrate the effectiveness of such an approach, we deploy wheeled robots in different indoor environments. We analyze the performance of a federated learning approac…
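Since the abstract centers on federated learning across a fleet of robots, the following is a minimal sketch of the standard federated averaging (FedAvg) aggregation step, shown only as an illustration and not as the paper's actual implementation; the function names, parameter layout, and dataset sizes are all assumptions made for this example.

```python
# Minimal federated-averaging (FedAvg) sketch: each robot trains a local copy
# of the obstacle-avoidance model on its own images, and a server averages the
# resulting parameters, weighted by local dataset size. All names here
# (federated_average, conv1.weight, the sizes) are illustrative, not from the paper.
from typing import Dict, List

import numpy as np

Params = Dict[str, np.ndarray]  # parameter name -> tensor


def federated_average(client_params: List[Params], num_samples: List[int]) -> Params:
    """Weighted average of client model parameters (FedAvg aggregation)."""
    total = float(sum(num_samples))
    averaged: Params = {}
    for name in client_params[0]:
        averaged[name] = sum(
            (n / total) * params[name]
            for params, n in zip(client_params, num_samples)
        )
    return averaged


# Toy usage: three hypothetical "robots" with random parameters and dataset sizes.
rng = np.random.default_rng(0)
clients = [{"conv1.weight": rng.normal(size=(8, 3, 3, 3))} for _ in range(3)]
sizes = [1200, 800, 2000]
global_params = federated_average(clients, sizes)
print(global_params["conv1.weight"].shape)  # (8, 3, 3, 3)
```

In a full system each robot would run several local training epochs between aggregation rounds; weighting by dataset size keeps robots with more images from being underrepresented in the shared model.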

Citations: cited by 2 publications (7 citation statements)
References: 18 publications (24 reference statements)

“…The details of the data gathering, including the usage of the photorealistic simulator NVIDIA Isaac Sim, the environment settings, and data distributions, are detailed in [5], from where we reuse the base simulation and training datasets. The datasets from [5] include data from three distinct scenarios in NVIDIA Isaac Sim and from Jetbot robots deployed in three real-world rooms to train and validate the vision-based obstacle avoidance models. In this work, the datasets from the simulator are represented as S_i, i ∈ {0, 1, 2}, where i indicates the simulated environments, including a hospital, office, and warehouse. The three real-world datasets are denoted as R_i, i ∈ {0, 1, 2}, where i indicates office spaces, hallways, and laboratory environments, respectively.…”
Section: A. Data Collection for FL (mentioning)
confidence: 99%
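The quoted statement enumerates six datasets: three simulated (S_0 to S_2) and three real-world (R_0 to R_2). Purely as an illustration of how such a split could be organized for federated training, here is a small registry sketch; the paths, class names, and one-client-per-dataset assignment are assumptions for the example, not details taken from [5] or the citing work.

```python
# Illustrative registry of the six datasets named in the citation statement:
# simulated environments S_0..S_2 (NVIDIA Isaac Sim) and real-world rooms
# R_0..R_2 (Jetbot robots). Paths and the FL client assignment are assumptions
# for this sketch, not taken from the cited papers.
from dataclasses import dataclass


@dataclass
class ObstacleDataset:
    name: str         # e.g. "S0" or "R1"
    source: str       # "simulation" or "real-world"
    environment: str  # scene the images were collected in
    path: str         # hypothetical location on disk


DATASETS = [
    ObstacleDataset("S0", "simulation", "hospital", "data/sim/hospital"),
    ObstacleDataset("S1", "simulation", "office", "data/sim/office"),
    ObstacleDataset("S2", "simulation", "warehouse", "data/sim/warehouse"),
    ObstacleDataset("R0", "real-world", "office spaces", "data/real/office"),
    ObstacleDataset("R1", "real-world", "hallways", "data/real/hallways"),
    ObstacleDataset("R2", "real-world", "laboratory", "data/real/lab"),
]

# One plausible FL setup: each dataset becomes its own client, so the data
# distribution across clients mirrors the distinct environments.
clients = {ds.name: ds for ds in DATASETS}
print(sorted(clients))  # ['R0', 'R1', 'R2', 'S0', 'S1', 'S2']
```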