2000
DOI: 10.1111/1467-9868.00225

Maximum Entropy Sampling and Optimal Bayesian Experimental Design

Abstract: When Shannon entropy is used as a criterion in the optimal design of experiments, advantage can be taken of the classical identity representing the joint entropy of parameters and observations as the sum of the marginal entropy of the observations and the preposterior conditional entropy of the parameters. Following previous work in which this idea was used in spatial sampling, the method is applied to standard parameterized Bayesian optimal experimental design. Under suitable conditions, which include non-lin…
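The identity the abstract refers to is the chain rule for Shannon entropy, H(θ, Y) = H(Y) + H(θ | Y): the joint entropy of parameters θ and observations Y equals the marginal entropy of the observations plus the preposterior conditional entropy of the parameters. A minimal numerical check on a small discrete joint distribution (the probabilities below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical 2x3 joint pmf p(theta, y) over a discrete parameter theta
# (rows) and observation y (columns); illustrative numbers only.
p = np.array([[0.10, 0.25, 0.15],
              [0.20, 0.10, 0.20]])

def H(q):
    """Shannon entropy (in nats) of a pmf given as an array."""
    q = q[q > 0]
    return -np.sum(q * np.log(q))

H_joint = H(p)                      # H(theta, Y)
p_y = p.sum(axis=0)                 # marginal pmf of the observation Y
H_y = H(p_y)                        # H(Y)
# Preposterior conditional entropy: H(theta | Y) = sum_y p(y) H(theta | Y=y)
H_cond = sum(p_y[j] * H(p[:, j] / p_y[j]) for j in range(p.shape[1]))

# Chain rule: joint = marginal of observations + conditional of parameters
assert np.isclose(H_joint, H_y + H_cond)
```

Since H(θ, Y) is fixed by the model, a design that maximises the marginal entropy H(Y) of the observations simultaneously minimises the expected posterior entropy H(θ | Y), which is the basis of maximum entropy sampling.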

Cited by 192 publications (126 citation statements). References 20 publications.
“…Our method is related to the framework of "optimal experimental design" in which experiments are designed so as to optimally elicit information about the process under investigation (Sebastiani and Wynn 2000; Atkinson et al 2007). Normative statistical principles from Bayesian inference can, in some cases, be used to select an experimental design that will best resolve the details of participants' underlying cognitive processes (e.g., set the free parameters of a model of the process under scrutiny; Rafferty et al 2012).…”
Section: Results
confidence: 99%
“…Indeed, work in machine learning and information theory has established how information in any given task might be optimally selected so as to maximally discriminate between competing hypotheses and accelerate learning (optimal experimental design; Sebastiani and Wynn 2000). Although human learning does not always mirror these optimal strategies, judicious choice of information has been shown to improve learning, for instance of category boundaries (Gureckis and Markant 2012) or speech motor learning (Knock et al 2000).…”
confidence: 99%
“…During the design phase, the optimal manoeuvre is selected, which is expected to yield the most information for the next inference cycle. The optimal manoeuvre is determined using the idea of maximum entropy sampling, where it is believed the most is learnt by sampling from where the least is known (Sebastiani and Wynn (2000)). In the following section, the steps used are described in more detail.…” (Preprints of the 20th IFAC World Congress, Toulouse, France, July 9-14, 2017)
Section: Fig. 1 Adaptive Bayesian Motion Planning Algorithm Flowchart
confidence: 99%
“…In order to choose the most informative manoeuvre using (6), we need to maximise (12). This means that the best move is to go toward the location whose predictive distribution has maximum entropy (equivalently, the least information); this is called the principle of maximum entropy sampling (Sebastiani and Wynn (2000)). In other words, the most informative manoeuvre is where the predictive distribution is most spread out and least informative.…”
Section: Design
confidence: 99%
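The selection rule quoted above can be sketched in a few lines: score each candidate design by the entropy of its predictive distribution and observe where that entropy is largest. The candidate names and Gaussian predictive variances below are hypothetical, purely to illustrate the principle (for a Gaussian, entropy is monotone in the variance, so maximum entropy reduces to maximum predictive variance):

```python
import math

# Hypothetical candidate manoeuvres, each with the predictive variance
# of the quantity that would be observed there (illustrative values).
candidates = {
    "manoeuvre_A": 0.5,
    "manoeuvre_B": 2.0,
    "manoeuvre_C": 1.2,
}

def gaussian_entropy(var):
    """Differential entropy (in nats) of a Gaussian with variance var."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

# Maximum entropy sampling: pick the candidate whose predictive
# distribution is most uncertain, i.e. has the largest entropy.
best = max(candidates, key=lambda c: gaussian_entropy(candidates[c]))
print(best)  # manoeuvre_B, the candidate with the largest predictive variance
```

By the entropy identity H(θ, Y) = H(Y) + H(θ | Y), sampling where the predictive entropy H(Y) is greatest is exactly what minimises the expected remaining uncertainty about the parameters.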
“…Since 2000 alone, they have been adopted in mixture hazard models (Louzada-Neto et al 2002), spatio-temporal models (Stroud et al 2001), structural equation models (Zhu and Lee 2001), disease mapping (Green and Richardson 2002), analysis of proportions (Brooks 2001), correlated data and clustered models (Chib and Hamilton 2000, Dunson 2000, Chen and Dey 2000), classification and discrimination (Wruck et al 2001), experimental design and analysis (Nobile and Green 2000, Sebastiani and Wynn 2000), random effects generalised linear models (Lenk and DeSarbo 2000) and binary data (Basu and Mukhopadhyay 2000). Mixtures of Weibulls (Tsionas 2002) and Gammas (Wiper et al 2001) have been considered, along with computational issues associated with MCMC methods (Liang and Wong 2001), issues of convergence (Liang and Wong 2001), the display…”
Section: Extensions to the Mixture Framework
confidence: 99%