Optimal Design for Nonlinear Response Models (2013)
DOI: 10.1201/b15054

Cited by 172 publications (258 citation statements). References 0 publications.

“…The lift-one algorithm is much faster than commonly used optimization techniques (Table 2), including Nelder-Mead, quasi-Newton, conjugate-gradient, and simulated annealing (for a comprehensive reference, see Nocedal and Wright (1999)), as well as popular design algorithms for similar purposes, including the Fedorov-Wynn (Fedorov (1972), Fedorov and Hackl (1997), Fedorov and Leonov (2014)), Multiplicative (Titterington (1976, 1978), Silvey, Titterington, and Torsney (1978)), and Cocktail (Yu (2010)) algorithms. We utilized the function constrOptim in R to implement the Nelder-Mead, quasi-Newton, conjugate-gradient, and simulated annealing algorithms.…”
Section: Lift-one Algorithm for Maximizing f(p) = |X′WX| (mentioning, confidence: 99%)
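The quoted comparison benchmarks the lift-one algorithm against general-purpose constrained optimizers run through R's constrOptim. The sketch below is a minimal illustration of that kind of benchmark, maximizing f(p) = |X′WX| over allocation proportions on a fixed candidate set; the model matrix X, the GLM weights w, and the starting allocation are hypothetical placeholders, not values from the cited paper.

```r
## Minimal sketch (not the cited authors' code): maximize f(p) = |X'WX|
## over allocation proportions p with constrOptim() from base R.
m <- 4                                          # number of candidate design points
X <- cbind(1, c(-1, -1, 1, 1), c(-1, 1, -1, 1)) # hypothetical model matrix (m x 3)
w <- c(0.25, 0.20, 0.20, 0.15)                  # hypothetical GLM weights at a parameter guess

## Optimize over the first m-1 proportions; the last one is 1 - sum(p_free).
negf <- function(p_free) {
  p <- c(p_free, 1 - sum(p_free))
  W <- diag(p * w)
  -det(t(X) %*% W %*% X)                        # negate because constrOptim() minimizes
}

## Linear constraints ui %*% p_free >= ci encode p_free >= 0 and sum(p_free) <= 1.
ui <- rbind(diag(m - 1), rep(-1, m - 1))
ci <- c(rep(0, m - 1), -1)

fit <- constrOptim(theta = rep(1 / m, m - 1), f = negf, grad = NULL,
                   ui = ui, ci = ci, method = "Nelder-Mead")
p_opt <- c(fit$par, 1 - sum(fit$par))           # approximate D-optimal allocation
round(p_opt, 3)
```

Reparameterizing with the last proportion as 1 minus the rest keeps the simplex constraint linear, which is the form constrOptim accepts.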
“…Theorem 1 is essentially a specialized version of the general equivalence theorem on a pre-determined finite set of design points. Unlike the usual form of the equivalence conditions (for examples, see Kiefer (1974), Pukelsheim (1993), Atkinson, Donev, and Tobias (2007), Yang (2012), Fedorov and Leonov (2014)), where the inverse matrix of X′WX needs to be calculated, Theorem 1 is expressed in terms of the quantities f(p), f_i(1/2), and f_i(0) only. These expressions are critical for the algorithms proposed later.…”
Section: Characterization of Locally D-optimal Designs (mentioning, confidence: 99%)
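For context, the "usual form" the quote contrasts against is the Kiefer-Wolfowitz type condition, which requires the inverse information matrix. A generic statement, in standard D-optimality notation rather than the cited paper's own, is:

```latex
% General equivalence theorem (Kiefer--Wolfowitz) for D-optimality:
% a design \xi^* with information matrix M(\xi^*) is D-optimal iff
\max_{x \in \mathcal{X}} \; f(x)^{\top} M(\xi^*)^{-1} f(x) \;\le\; p ,
% with equality at every support point of \xi^*.  Here p is the number of
% model parameters and M(\xi^*) plays the role of X'WX, so checking the
% condition requires computing (X'WX)^{-1}.
```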
“…The predicted M_F after linearisation is a block matrix with a block corresponding to derivatives of the log-likelihood with respect to the fixed effects, a block for derivatives with respect to the standard deviation terms, and a block containing mixed derivatives with respect to all parameters. In our work, the block of mixed derivatives was set to 0 for linearisation, based on publications showing the better performance of the block-diagonal expression compared with the full one (Mielke and Schwabe, 2010; Fedorov and Leonov, 2014; Nyberg et al., 2014).…”
Section: Fisher Information Matrix (mentioning, confidence: 99%)
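In schematic terms (notation ours, not taken from the cited paper), the full and block-diagonal forms of the linearised Fisher information matrix that the quote refers to can be written as:

```latex
% Full linearised FIM, with a block A for the fixed effects \beta, a block C
% for the standard-deviation (variance) parameters \lambda, and a mixed block B:
M_F(\beta,\lambda) =
\begin{pmatrix}
  A(\beta,\lambda) & B(\beta,\lambda) \\
  B(\beta,\lambda)^{\top} & C(\beta,\lambda)
\end{pmatrix},
% Block-diagonal approximation used in the quoted work (mixed block set to 0):
M_F^{\mathrm{bd}}(\beta,\lambda) =
\begin{pmatrix}
  A(\beta,\lambda) & 0 \\
  0 & C(\beta,\lambda)
\end{pmatrix}.
```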
“…The added value of the Bayesian approach in optimal experimental design has been demonstrated convincingly by Woods et al. (2006) for generalized linear models in an industrial context and by Sándor and Wedel (2001) for binary choice models in a marketing context. In the context of the optimal design of biopharmaceutical experiments, Fedorov and Leonov (2014) mention the main advantage of Bayesian optimal designs, namely that they take into account prior uncertainty about the unknown parameters. However, they also indicate that this leads to computationally demanding optimization problems.…”
Section: Introduction (mentioning, confidence: 99%)
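As a brief illustration of why the prior makes the optimization demanding (standard notation, not drawn from the cited text), the pseudo-Bayesian D-criterion replaces the local criterion by its expectation over the prior, so every criterion evaluation involves an integral over the parameter space:

```latex
% Local D-criterion at a parameter guess \theta:
\Phi_D(\xi;\theta) = \log \bigl| M(\xi,\theta) \bigr| ,
% Pseudo-Bayesian D-criterion: average over the prior \pi(\theta):
\Phi_B(\xi) = \int_{\Theta} \log \bigl| M(\xi,\theta) \bigr| \, \pi(\theta) \, d\theta .
```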