2017
DOI: 10.1109/tap.2017.2740974
Cross-Entropy Method for Electromagnetic Optimization With Constraints and Mixed Variables

Cited by 31 publications (12 citation statements)
References 38 publications
“…For probability learning, the probability distribution function p(x) is usually introduced with an indicator u; e.g., p(x, u) can be a Gaussian distribution, where u contains its mean and variance [11]. Denoting L = N × (M + 1), the indicator u is an L-dimensional vector, defined as…”
Section: B. The ASCE-Based Offload Learning
Citation type: mentioning, confidence: 99%
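The Gaussian parameterization described in this excerpt can be illustrated with a short sketch. Below is a minimal, hypothetical example of one cross-entropy (CE) iteration in which the indicator u, represented here by per-dimension mean and standard-deviation arrays, is refit to the elite samples; the objective f and the sizes N and M are placeholders, not taken from the cited paper.

```python
# Minimal CE sketch with a Gaussian sampling distribution: the
# indicator u is carried as two L-dimensional arrays (mean, std).
import numpy as np

def ce_gaussian_step(f, mean, std, n_samples=100, elite_frac=0.1):
    """One CE update: sample from N(mean, std^2), keep the elite
    fraction with the lowest objective values, and refit u."""
    samples = mean + std * np.random.randn(n_samples, mean.size)
    scores = np.array([f(x) for x in samples])
    elite = samples[np.argsort(scores)[: max(1, int(elite_frac * n_samples))]]
    return elite.mean(axis=0), elite.std(axis=0) + 1e-12  # avoid collapse

# Example: minimize a simple quadratic over L = N * (M + 1) variables.
N, M = 4, 2
mean, std = np.zeros(N * (M + 1)), np.ones(N * (M + 1))
for _ in range(50):
    mean, std = ce_gaussian_step(lambda x: np.sum(x ** 2), mean, std)
print(mean)  # close to the zero vector after 50 iterations
```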
“…In particular, from the constant-directivity contour curves (corrected Carrel's graph, [1]) and considering an antenna length equal to 0.619 m, the optimum value of σ is derived equal to 0.1648 and the respective value of τ equal to 0.8891, while the derived antenna has to be composed of 10 dipoles. The largest dipole (m = 1) is considered to be in resonant condition at the lowest frequency (f_min = 470 MHz) of the passband, and therefore its length must be equal to λ_max/2, as shown in (8). Also, we decided to set the radius of the shortest dipole (m = 10) equal to the radius (2 mm) of the dipoles of the optimized LPDA, and therefore the other (larger) dipoles of the LPDA will have larger radii, thus resulting in an antenna that can be fabricated in practice.…”
Section: Definition of Limits of Optimization Variables
Citation type: mentioning, confidence: 99%
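The quoted dimensions can be checked numerically. The sketch below assumes the standard LPDA design relations l_{m+1} = τ·l_m and σ = d_m/(2·l_m), a textbook convention not stated in the excerpt; under these assumptions the computed boom length lands close to the 0.619 m antenna length quoted above.

```python
# Numeric check of the LPDA dimensions quoted in the excerpt, assuming
# the standard relations l_{m+1} = tau * l_m and d_m = 2 * sigma * l_m.
C = 299_792_458.0          # speed of light, m/s
F_MIN = 470e6              # lowest passband frequency, Hz
TAU, SIGMA, N_DIPOLES = 0.8891, 0.1648, 10

lam_max = C / F_MIN            # longest wavelength in the passband
lengths = [lam_max / 2]        # largest dipole (m = 1) resonates at f_min
for _ in range(N_DIPOLES - 1):
    lengths.append(TAU * lengths[-1])     # l_{m+1} = tau * l_m

boom = sum(2 * SIGMA * l for l in lengths[:-1])   # sum of spacings d_m
print(f"l_1 = {lengths[0]:.4f} m, l_10 = {lengths[-1]:.4f} m")
print(f"boom length = {boom:.3f} m")  # ~0.619 m, matching the excerpt
```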
“…where x^[s] is the association corresponding to the s-th item in the re-sorted sequence obtained from step 5 in Algorithm 1. By following this procedure in each iteration, as inspired by [6], the CE approach can produce a sequence of sampling distributions that are increasingly concentrated around the optimal design.…”
Section: A. Problem Formulation of Association Learning
Citation type: mentioning, confidence: 99%
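The role of the re-sorted sequence can be made concrete with a small sketch. This is a hypothetical CE step for a binary association vector, not a reproduction of the citing paper's Algorithm 1: the samples are sorted by objective value, and the Bernoulli sampling probabilities are refit to the elite samples x^[1], …, x^[S].

```python
# Minimal CE update for a binary association problem (hypothetical
# setup): sort samples by objective, refit probabilities to the elite.
import numpy as np

rng = np.random.default_rng(0)

def ce_binary_step(f, p, n_samples=200, elite_frac=0.1):
    """One CE iteration over binary vectors sampled as x_i ~ Bernoulli(p_i)."""
    samples = (rng.random((n_samples, p.size)) < p).astype(float)
    order = np.argsort([f(x) for x in samples])      # re-sorted sequence
    elite = samples[order[: max(1, int(elite_frac * n_samples))]]
    return elite.mean(axis=0)                        # refit probabilities

# Example objective: match a hidden target association pattern.
target = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=float)
p = np.full(target.size, 0.5)
for _ in range(30):
    p = ce_binary_step(lambda x: np.sum(np.abs(x - target)), p)
print(np.round(p))  # the distribution concentrates around the target
```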
“…The CE approach was first introduced in 1997 [5] and later developed in machine learning. The advantage of the CE approach lies in its adaptive update procedure [6], which makes it inherently capable of solving combinatorial optimization problems in a much simpler way than typical relaxation techniques. To the best of our knowledge, this is the first time that the CE method has been used to solve the constrained user association problem.…”
Section: Introduction
Citation type: mentioning, confidence: 99%