2014
DOI: 10.1007/s10458-014-9279-8

Decision-theoretic planning under uncertainty with information rewards for active cooperative perception

Abstract: Partially observable Markov decision processes (POMDPs) provide a principled framework for modeling an agent's decision-making problem when the agent needs to consider noisy state estimates. POMDP policies take into account an action's influence on the environment as well as the potential information gain. This is a crucial feature for robotic agents, which generally have to consider the effect of actions on sensing. However, building POMDP models which reward information gain directly is not straightforward, b…
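To make "rewarding information gain directly" concrete, here is a toy Python sketch (my illustration, not the paper's formulation): a discrete-POMDP Bayes belief update paired with a belief-dependent negative-entropy reward that grows as the belief sharpens. All names and the two-state example are hypothetical.

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """Discrete Bayes filter: b'(s') ∝ O[a][o, s'] · Σ_s T[a][s, s'] b(s)."""
    b_next = O[a][o] * (b @ T[a])   # predict, then weight by observation likelihood
    return b_next / b_next.sum()    # normalize

def information_reward(b):
    """Toy belief-dependent reward: negative entropy -H(b), maximal when the
    belief is certain. Illustrative only; the paper's information rewards
    are defined differently."""
    p = b[b > 0]                    # drop zero entries to avoid log(0)
    return float(np.dot(p, np.log(p)))

# Two-state example: one noisy observation sharpens a uniform belief,
# so the information reward increases (moves toward 0 from -log 2).
T = {0: np.eye(2)}                            # action 0 leaves the state unchanged
O = {0: np.array([[0.8, 0.2],                 # row o: P(o | s') for each s'
                  [0.2, 0.8]])}
b = belief_update(np.array([0.5, 0.5]), T, O, a=0, o=0)   # -> [0.8, 0.2]
print(information_reward(b))                  # ≈ -0.50 > -log 2 ≈ -0.69
```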

Cited by 54 publications (50 citation statements). References 41 publications.
“…If $b_i^s > b_j^s$ for all $j \in \mathcal{N}_i^s$ for which $c_{j,\text{next}}^s = c_{i,\text{next}}^s$, then robot $i$ wins the auction and maintains the same next cluster and bid, i.e., $c_{i,\text{next}}^{s+1} = c_{i,\text{next}}^s$ and $b_i^{s+1} = b_i^s$. The robots that lose the auction update their set of 'free' clusters by removing cluster $c_{j,\text{next}}^s$, i.e., $I_{j,f}^s = I_{j,f}^s \setminus \{c_{j,\text{next}}^s\}$, and select a new next cluster and bid according to (15) and (18). If $I_{i,f}^s = \emptyset$, i.e., if there are no other available clusters for robot $i$, we set $c_{i,\text{next}}^{s+1} = \text{'depot'}$, effectively controlling the robot to return to a depot after it has completed its current (final) task.…”
Section: A Distributed Auction Mechanism
confidence: 99%
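For readers who want the mechanics at a glance, a minimal Python sketch of the quoted auction round follows. The `Robot` class, its fields, and `select_cluster_and_bid` are hypothetical stand-ins; in particular, the cluster-selection and bid rules (15) and (18) of the citing paper are stubbed out, not reproduced.

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    next_cluster: str                                  # c_{i,next}^s
    bid: float                                         # b_i^s
    free_clusters: set = field(default_factory=set)    # I_{i,f}^s
    neighbors: list = field(default_factory=list)      # N_i^s

    def select_cluster_and_bid(self):
        # Hypothetical stand-in for rules (15) and (18) of the citing paper.
        return next(iter(self.free_clusters)), 1.0

def auction_round(robots):
    """One synchronous round s -> s+1: winners are decided on round-s
    values first, then losers update, so no robot mixes rounds."""
    losers = [i for i in robots
              if any(j.bid >= i.bid                    # i needs b_i > b_j to win
                     for j in i.neighbors
                     if j.next_cluster == i.next_cluster)]
    for i in losers:
        i.free_clusters.discard(i.next_cluster)        # contested cluster is no longer free
        if not i.free_clusters:
            i.next_cluster, i.bid = "depot", 0.0       # no tasks left: return to a depot
        else:
            i.next_cluster, i.bid = i.select_cluster_and_bid()
```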
“…Exploration can be included as a first step as in [14], and incorporating this step within our approach is a subject of further research. Two approaches that are fundamentally different from choosing a dynamic programming horizon are (i) to represent the reachable belief space with a finite set in a clever way, typically by making an assumption about the family of distributions of the hidden states [1], [2], [7], [13], [15], [16], or (ii) to avoid this representation problem altogether by working in policy space [17]–[19]. With respect to the latter, [19] defines a generalized policy graph, which nonetheless relies on belief space sampling.…”
Section: Introduction
confidence: 99%
“…In the decision-theoretic approach, Spaan and Lima [2009] have proposed a POMDP framework to select optimal static cameras in surveillance based on user-defined objectives like tracking a particular target. Similarly, Daniyal and Cavallaro [2011] and Daniyal et al. [2010] have proposed a finite-horizon POMDP model to select the best view in sports video production that can be adapted to many of the surveillance tasks.…”
Section: Decision-theoretic Approach
confidence: 99%
“…Decision-theoretic coordination and control for surveillance systems has been explored [Natarajan et al. 2012a, 2012b; Spaan and Lima 2009], in which the control decisions are made in the presence of uncertainties like the target's motion and location. Decision-theoretic approaches are well suited to choosing optimal actions in the presence of uncertainties, but computing solutions for problems with large state spaces is computationally intractable.…”
Section: Summarizing Remarks on MC3 Strategies
confidence: 99%