2001
DOI: 10.1299/jsmec.44.103
A Sequential Approximation Method Using Neural Networks for Nonlinear Discrete-Variable Optimization with Implicit Constraints.

Cited by 11 publications (7 citation statements)
References 8 publications
“…In this paper, the 'sequential neural network approximation (SNA) method' [8,9] is used to solve the problem. In this method, first a back-propagation neural network is trained to simulate the feasible domain formed by the implicit constraints using a few representative training data.…”
Section: The Sequential Neural Network Approximation Method
Citation type: mentioning; confidence: 99%
“…The design variables are radius R = x1, length L = x2, thickness Ts = x3, and thickness Th = x4. The objective is to minimize the cost. (Table 2, "Comparison of the results", compares Sandgren [4], Qian [15], Kannan [10], Hsu [16], He [18], and this paper.) Through 20 trials, the objective function attained 6060 ≤ f ≤ 6066.…”
Section: Discrete Design Variables Problem 2 (Feasible Domain Is Separated)
Citation type: mentioning; confidence: 99%
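The excerpt names the variables of the classic pressure-vessel problem but its cost formula was lost in extraction. The coefficients below are the standard ones used in this literature since Sandgren [4]; they may differ in presentation from the cited paper, so treat this as an assumed reconstruction, mapped to the excerpt's variable order R = x1, L = x2, Ts = x3, Th = x4.

```python
# Standard pressure-vessel cost (material plus forming/welding terms),
# with the excerpt's variable mapping: R = x1, L = x2, Ts = x3, Th = x4.
def cost(x1, x2, x3, x4):
    return (0.6224 * x3 * x1 * x2      # cylindrical shell material
            + 1.7781 * x4 * x1 ** 2    # hemispherical head material
            + 3.1661 * x3 ** 2 * x2    # shell forming/welding
            + 19.84 * x3 ** 2 * x1)    # head forming/welding

# A widely reported near-optimal design (Ts = 0.8125, Th = 0.4375,
# R ≈ 42.098, L ≈ 176.637) gives a cost consistent with the
# 6060 ≤ f ≤ 6066 range quoted in the excerpt.
print(cost(42.0984, 176.6366, 0.8125, 0.4375))
```

Note that Ts and Th are the discrete variables in practice (plate thicknesses come in multiples of 0.0625 in), which is why this problem is a standard benchmark for discrete-variable optimizers.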
“…(16) should be updated so as to satisfy Eq. (20). In order to investigate the influence of the penalty function, let us consider the following simple problem with one discrete variable.…”
Section: Properties Of Penalty Function For Discrete Variables
Citation type: mentioning; confidence: 99%
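The excerpt studies how a penalty function behaves on a one-discrete-variable problem. The cited paper's exact penalty is not given here, so the sketch below uses one common generic form — the squared distance from x to the nearest allowed discrete value, scaled by a penalty factor r — purely to illustrate why the factor must be updated: as r grows, the penalized minimizer is driven onto the discrete grid. The objective f(x) = (x − 2.3)² and the grid {0, 1, 2, 3} are made-up examples.

```python
# Generic discrete-variable penalty (an assumed form, not necessarily
# the cited paper's): squared distance to the nearest allowed value.
def discrete_penalty(x, allowed, r):
    return r * min((x - a) ** 2 for a in allowed)

# One-variable illustration: continuous optimum at x = 2.3, but x must
# end up on the grid {0, 1, 2, 3}; nearest grid point is x = 2.
def penalized(x, r, allowed=(0, 1, 2, 3)):
    return (x - 2.3) ** 2 + discrete_penalty(x, allowed, r)

for r in (0.1, 1.0, 10.0):
    # Crude fine-grid search over [-1, 4]; enough for an illustration.
    xs = [i / 1000 for i in range(-1000, 4001)]
    best = min(xs, key=lambda x: penalized(x, r))
    print(r, round(best, 3))  # best drifts toward 2 as r grows
```

This is exactly the behavior the excerpt's update rule targets: if r is too small the penalized optimum sits between grid points, so r must be increased until the minimizer effectively coincides with a discrete value.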