2014
DOI: 10.1609/aimag.v35i4.2556
Multirobot Coordination for Space Exploration

Abstract: Teams of artificially intelligent planetary rovers have tremendous potential for space exploration, allowing for reduced cost, increased flexibility, and increased reliability. However, having multiple autonomous devices acting simultaneously leads to a problem of coordination: to achieve the best results, they should work together. This is not a simple task. Due to the large distances and harsh environments, a rover must be able to perform a wide variety of tasks with a wide variety of potential team…

Cited by 26 publications (13 citation statements)
References 33 publications
“…This is because each item (apart from the top and bottom ranked) is equally associated with positive and negative outcomes during training trials. Consequently, in developing an RL account (hereafter termed RL-ELO) capable of successful hierarchy learning performance, we sought inspiration from algorithms used to update player ratings in games (e.g., Yliniemi and Tumer, 2013; the ELO rating system in chess; see Supplemental Experimental Procedures). Their critical component is to increase or decrease the rating (i.e., power) of the winning or losing individual in a pairwise contest or trial as a function of the rating of one’s opponent (i.e., the winner receives a relatively small update if the opponent was estimated to be much less powerful). This approach has been shown to work even on very large problems.…”
Section: Results
confidence: 99%
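The pairwise rating update described in the statement above can be sketched as follows. This is a minimal illustration of the general ELO mechanism, not the cited paper's implementation; the function name and the K-factor of 32 are assumptions.

```python
def elo_update(r_winner, r_loser, k=32.0):
    """One pairwise ELO update.

    The winner's expected score rises with its rating advantage, so a
    heavily favored winner gains only a small amount -- the behavior the
    citation statement highlights.
    """
    # Expected score of the winner under the logistic ELO model.
    expected_win = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    # Update magnitude shrinks as the win becomes more expected.
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta
```

For two equally rated players (1000 vs. 1000), each rating moves by the maximum k/2 = 16 points; a 1400-rated player beating a 1000-rated one gains far less.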
“…This quickly introduces another kind of challenge: the coordination of multiple robots within an uncertain and unsafe environment. The authors in [25] discussed the benefits and challenges of multi-robot coordination from the perspective of planetary exploration. In their work, the appropriateness of reinforcement learning to overcome these challenges was also presented.…”
Section: Space Exploration
confidence: 99%
“…Neural networks have also been successful in many direct control tasks (Jorgensen & Schley, 1995; Yliniemi et al., 2014a). An ANN is customized for a particular task through a search for ‘weights’, which dictate the output of an ANN, given an input.…”
Section: Artificial Neural Network
confidence: 99%
“…An ANN is a powerful function approximator, which has been used in tasks as varied as weather forecasting (Mellit & Pavan, 2010), medical diagnosis (Baxt, 1991), and dynamic control (Lewis et al ., 1998; Yliniemi et al ., 2014b). Neural networks have also been successful in many direct control tasks (Jorgensen & Schley, 1995; Yliniemi et al ., 2014a). An ANN is customized for a particular task through a search for ‘weights’, which dictate the output of an ANN, given an input.…”
Section: Introduction
confidence: 99%
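The idea in the statements above, that an ANN's behavior is fixed entirely by a weight vector found by direct search rather than gradient descent, can be sketched minimally. The tiny 1-2-1 topology and the random search below are illustrative assumptions, not the method of the cited works:

```python
import math
import random

def ann_forward(weights, x):
    # Tiny fixed-topology net: 1 input -> 2 tanh hidden units -> 1 output.
    # The 7-element weight vector fully determines the network's output.
    w1a, b1a, w1b, b1b, w2a, w2b, b2 = weights
    h1 = math.tanh(w1a * x + b1a)
    h2 = math.tanh(w1b * x + b1b)
    return w2a * h1 + w2b * h2 + b2

def random_weight_search(target_fn, samples, iters=2000, seed=0):
    # Direct (gradient-free) search over the weight vector: sample random
    # candidates and keep the one with the lowest squared error on `samples`.
    rng = random.Random(seed)
    best_w, best_err = None, float("inf")
    for _ in range(iters):
        w = [rng.uniform(-2.0, 2.0) for _ in range(7)]
        err = sum((ann_forward(w, x) - target_fn(x)) ** 2 for x in samples)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err
```

Evolutionary or population-based searches used in neuro-control work follow the same contract: a candidate is just a weight vector, and fitness is measured by running the resulting network on the task.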