“…In the static procedure, η_k varied from 0.01 to 0.9 with a step of 0.01, whereas the choice of the dynamic procedure is reported in Table 3, where five different options are examined.

(1) Initialize ρ^0 ← z^0 satisfying constraints [25, 26] (ρ^0 is a continuous vector, z^0 a discrete vector, both of dimension M + 1)
(2) Initialize ρ* ← ρ^0, z* ← z^0 (ρ* is the optimal solution of the continuous problem)
(3) Initialize h ← 0
(4) while ((k ≤ K) ∨ (h ≤ H)) do (K and H are integer parameters)
      Form the selection set S(ρ^k) (steps 5–13): S(ρ^k) is a set of discrete vectors [29]
      Estimate the surrogate gradient: ∇_j OF(ρ^k) = OF(p) − OF(q), where j satisfies p − q = e_j and p, q ∈ S(ρ^k)
      Update the state:
(16)  ρ^{k+1} = f[ρ^k − η_k ∇OF(ρ^k)] (η_k is the step size of the gradient method)
      Update the optimal solution:
(17)  if OF(ρ^k) ≤ OF(ρ*) then
(18)    ρ* ← ρ^k
(19)    h ← 0
(20)  else
(21)    h ← h + 1
(22)  end if
(23) end while
      Return the optimal solution z*:
(24) Return z* ← arg min_{z^k} OF(z^k)

ALGORITHM 1: The Surrogate Method.…”
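The loop above can be sketched in Python. This is a minimal illustration, not the paper's implementation: it assumes a separable integer test objective, takes S(ρ^k) to be the floor/ceil roundings of ρ^k (so the pair p, q with p − q = e_j is a unit step in coordinate j), uses a box projection onto [lo, hi] for f[·], and simplifies the stopping rule to "at most K iterations, early exit after H consecutive non-improving ones". The names `surrogate_method`, `lo`, `hi`, and the toy objective are hypothetical.

```python
import numpy as np

def surrogate_method(OF, z0, eta=0.1, K=200, H=50, lo=0, hi=10):
    """Sketch of Algorithm 1: gradient descent on a continuous relaxation rho,
    with each gradient component estimated from a pair of discrete neighbours
    p, q satisfying p - q = e_j (assumption: p, q are roundings of rho)."""
    rho = z0.astype(float)                 # (1) rho^0 <- z^0
    z_star = z0.copy()                     # (2) incumbent discrete solution
    best = OF(z_star)
    h = 0                                  # (3) stagnation counter
    for k in range(K):                     # (4) simplified stopping rule
        if h > H:
            break
        q = np.floor(rho).astype(int)      # steps 5-13: member of S(rho^k)
        grad = np.empty_like(rho)
        for j in range(len(rho)):          # surrogate gradient: OF(p) - OF(q)
            p = q.copy()
            p[j] += 1                      # p - q = e_j, p, q in S(rho^k)
            grad[j] = OF(p) - OF(q)
        rho = np.clip(rho - eta * grad, lo, hi)  # (16) projected step f[.]
        z = np.rint(rho).astype(int)       # candidate discrete point z^k
        val = OF(z)
        if val <= best:                    # (17)-(22) incumbent update
            z_star, best, h = z, val, 0
        else:
            h += 1
    return z_star                          # (24) best discrete point seen

# Toy usage: separable quadratic over integer vectors.
target = np.array([3, 5, 1])
OF = lambda z: float(np.sum((z - target) ** 2))
print(surrogate_method(OF, np.zeros(3, dtype=int)))  # -> [3 5 1]
```

With a fixed step size η_k the relaxation ρ oscillates around the optimum once it arrives, which is why the algorithm returns the best discrete point seen (step 24) rather than the final iterate.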