2019
DOI: 10.1016/j.jspi.2018.03.005
Quasi-Newton algorithm for optimal approximate linear regression design: Optimization in matrix space

Cited by 5 publications (3 citation statements)
References 39 publications
“…Correctness of estimates at the network testing stage amounted to 100% in both tested heavy metal cases. The learning method, aimed at minimizing the neural network error by modifying the weight coefficients of the neuron input signals, was the Quasi-Newton (BFGS) algorithm [41].…”
Section: Numerical Analysis
confidence: 99%
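The excerpt above describes training a network by minimizing its error directly over the weight vector with BFGS. A minimal sketch of that idea, assuming a tiny one-hidden-layer tanh network and synthetic data (the architecture, data, and loss are illustrative assumptions, not the setup of the cited study), using SciPy's BFGS optimizer:

```python
# Sketch: quasi-Newton (BFGS) training of a small neural network.
# All layer weights are flattened into one vector, and scipy.optimize.minimize
# adjusts them to reduce the network's mean squared error.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                      # 50 samples, 3 input features (synthetic)
y = np.sin(X @ np.array([1.0, -0.5, 0.25]))       # synthetic target values

n_in, n_hidden = 3, 5

def unpack(theta):
    """Split the flat parameter vector into the two layers' weights."""
    W1 = theta[: n_in * n_hidden].reshape(n_in, n_hidden)
    w2 = theta[n_in * n_hidden :]
    return W1, w2

def mse(theta):
    """Mean squared error of a one-hidden-layer tanh network."""
    W1, w2 = unpack(theta)
    return np.mean((np.tanh(X @ W1) @ w2 - y) ** 2)

theta0 = rng.normal(scale=0.1, size=n_in * n_hidden + n_hidden)
result = minimize(mse, theta0, method="BFGS")     # quasi-Newton weight updates
print("final training MSE:", result.fun)
```

SciPy approximates the gradient numerically here; the cited study's actual inputs, topology, and error measure are not reproduced.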
“…Considering that these data are discrete and inconvenient to apply directly, curve fitting is conducted using the optimization software 1stOpt [51] with a quasi-Newton algorithm [52]. The curve-fitting algorithm was chosen by weighing fitting accuracy against conciseness. The yield strength and moduli of elasticity of ordinary and prestressed steel bars at elevated temperatures can then be approximated, in the form of the ratio of the high-temperature (T) value to the room-temperature (20 °C) value, by the following equations.…”
Section: Yield Strength and Modulus of Elasticity
confidence: 99%
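The excerpt describes fitting smooth curves through discrete high-temperature strength data with a quasi-Newton optimizer. A minimal sketch under stated assumptions: the sigmoidal reduction-factor form and the data points below are invented for illustration and are not the equations or tables of the cited paper.

```python
# Sketch: fit a reduction-factor curve k(T) = f_y(T) / f_y(20 °C) to discrete
# tabulated points by minimizing squared residuals with BFGS.
import numpy as np
from scipy.optimize import minimize

# Hypothetical discrete data: temperature (°C) vs. yield-strength ratio.
T = np.array([20.0, 200.0, 400.0, 600.0, 800.0])
k = np.array([1.00, 0.95, 0.75, 0.35, 0.10])

def model(params, T):
    """Assumed sigmoidal decay of the strength ratio with temperature."""
    a, b = params
    return 1.0 / (1.0 + np.exp(a * (T - b)))

def sse(params):
    """Sum of squared residuals between the model and the tabulated data."""
    return np.sum((model(params, T) - k) ** 2)

fit = minimize(sse, x0=np.array([0.01, 500.0]), method="BFGS")
print("fitted parameters:", fit.x, "SSE:", fit.fun)
```

The fitted curve returns the ratio k(T) relative to room temperature, matching the ratio form described in the excerpt.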
“…The Fedorov–Wynn algorithm (Fedorov, 1972; Wynn, 1972) does so by iteratively replacing part of the existing design with the singular design point showing the largest derivative, while the class of multiplicative algorithms (with convergence proven by Yu, 2010a) multiplies every weight in the existing design simultaneously by a factor depending on the derivatives. Both algorithms in their basic form tend to propose a larger than necessary number of support points, but there are several adaptations to account for this problem (Gaffke & Schwabe, 2019; Martin & Camacha Gutierrez, 2015; Pronzato, 2013; Yang et al., 2013; Yu, 2011). In this paper, we will focus on the multiplicative algorithm, but our results do not depend on how an optimal design was obtained.…”
Section: Optimal Design Problems and Design Algorithms
confidence: 99%
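The multiplicative algorithm mentioned in the excerpt admits a compact sketch. Assuming an illustrative quadratic regression f(x) = (1, x, x²)ᵀ on a grid over [−1, 1] (not an example from the paper), each weight is multiplied by its normalized sensitivity d_i(w)/m, where d_i(w) = f(x_i)ᵀ M(w)⁻¹ f(x_i); since Σ_i w_i d_i(w) = tr(M(w)⁻¹ M(w)) = m, the update keeps the weights summing to one.

```python
# Sketch: multiplicative algorithm for a D-optimal approximate design
# on a fixed candidate grid (Titterington-type update).
import numpy as np

xs = np.linspace(-1.0, 1.0, 21)                          # candidate design points
F = np.column_stack([np.ones_like(xs), xs, xs**2])       # model matrix, m = 3 parameters
m = F.shape[1]

w = np.full(len(xs), 1.0 / len(xs))                      # uniform starting design
for _ in range(500):
    M = F.T @ (w[:, None] * F)                           # information matrix M(w)
    d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F) # sensitivities d_i(w)
    w = w * d / m                                        # simultaneous multiplicative update
    # sum_i w_i * d_i(w) = m, so the weights remain normalized after each step

support = xs[w > 1e-4]
print("support points:", support)
print("weights:", w[w > 1e-4].round(3))
```

After a few hundred iterations the mass concentrates near the known D-optimal support {−1, 0, 1}, but many neighboring grid points keep small positive weights, illustrating the tendency, noted in the excerpt, of the basic algorithm to propose more support points than necessary.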