Abstract: The cross-entropy method is a recent, versatile Monte Carlo technique. This article provides a brief introduction to the cross-entropy method and discusses how it can be used for rare-event probability estimation and for solving combinatorial, continuous, constrained and noisy optimization problems. A comprehensive list of references on cross-entropy methods and applications is included.

Keywords: cross-entropy, Kullback-Leibler divergence, rare events, importance sampling, stochastic search.

The cross-entropy (CE) method is a recent generic Monte Carlo technique for solving complicated simulation and optimization problems. The approach was introduced by R. Y. Rubinstein in [41,42], extending his earlier work on variance minimization methods for rare-event probability estimation [40].

The CE method can be applied to two types of problem:

1. Estimation: Estimate ℓ = E[H(X)], where X is a random variable or vector taking values in some set X and H is a function on X. An important special case is the estimation of a probability ℓ = P(S(X) ≥ γ), where S is another function on X.

2. Optimization: Optimize (that is, maximize or minimize) S(x) over all x ∈ X, where S is some objective function on X. S can be either a known or a noisy function. In the latter case the objective function needs to be estimated, e.g., via simulation.

In the estimation setting, the CE method can be viewed as an adaptive importance sampling procedure that uses the cross-entropy or Kullback-Leibler divergence as a measure of closeness between two sampling distributions, as is explained further in Section 1. In the optimization setting, the optimization problem is first translated into a rare-event estimation problem, and the CE method for estimation is then used as an adaptive algorithm to locate the optimum, as is explained further in Section 2 and sketched in the code below.

An easy tutorial on the CE method is given in [15]. A more comprehensive treatment can be found in [45]; see also [46, Chapter 8]. The CE method homepage is www.cemethod.org.

The CE method has been successfully applied to a diverse range of estimation and optimization problems, including buffer allocation [1], queueing models of telecommunication systems [14,16], optimal control of HIV/AIDS spread [48,49], signal detection [30], combinatorial auctions [9], DNA sequence alignment [24,38], scheduling and vehicle routing [3,8,11,20,23,53], neural and reinforcement learning [31,32,34,52,54], project management [12], rare-event simulation with light- and heavy-tail distributions [2,10,21,28], and clustering analysis [4,5,29]. Applications to classical combinatorial optimization problems include the max-cut, traveling salesman, and Hamiltonian cycle problems.
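To make the optimization setting concrete, the following is a minimal sketch of the CE iteration for continuous maximization. It assumes a Gaussian sampling family whose mean and standard deviation are refitted to the best-performing ("elite") samples at each iteration; the function name ce_maximize, the parameter values, and the toy objective are illustrative choices, not taken from the article.

import numpy as np

def ce_maximize(S, dim, n_samples=100, elite_frac=0.1, n_iters=50, seed=0):
    # Minimal cross-entropy optimization sketch (illustrative): maximize S over R^dim
    # with a Gaussian sampling distribution whose mean and standard deviation are
    # refitted to the elite samples at every iteration.
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)                      # initial sampling mean
    sigma = np.full(dim, 5.0)               # initial sampling standard deviation
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        x = rng.normal(mu, sigma, size=(n_samples, dim))   # draw candidate solutions
        scores = np.array([S(xi) for xi in x])
        elite = x[np.argsort(scores)[-n_elite:]]           # best-scoring candidates
        mu = elite.mean(axis=0)                            # refit sampling parameters
        sigma = elite.std(axis=0) + 1e-8                   # small floor avoids collapse
    return mu

# Toy usage: a smooth objective with its maximum at (2, -3).
best = ce_maximize(lambda x: -np.sum((x - np.array([2.0, -3.0]))**2), dim=2)
print(best)

In the full CE algorithm the elite threshold corresponds to an adaptively chosen level γ, and smoothing of the parameter updates is commonly added; see [45] for details.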
Tilted-axis rotation, arising from Fermi-aligned configurations, has been observed for the first time to cause backbending in an odd-proton nucleus. In 181Re, two t-bands are found to be energetically favored relative to the usual rotation-aligned s-bands, presenting an alternative form of cold nuclear rotation. Interactions between the bands are weak, and unambiguous comparisons with tilted-axis-cranking calculations can be made. [S0031-9007(97)03669-7]
We consider support vector machines for binary classification. As opposed to most approaches, we use the number of support vectors (the "L0 norm") as a regularizing term instead of the L1 or L2 norms. In order to solve the optimization problem we use the cross-entropy method to search over the possible sets of support vectors. The algorithm consists of solving a sequence of efficient linear programs. We report experiments where our method produces generalization errors that are similar to support vector machines, while using a considerably smaller number of support vectors.
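The abstract above describes a CE search over candidate sets of support vectors, guided by solving a sequence of linear programs. As a purely illustrative sketch (not the authors' algorithm), a CE search over binary inclusion vectors could look as follows, with a hypothetical score function standing in for the LP-based, L0-penalized objective.

import numpy as np

def ce_select_support_vectors(score, n_points, n_samples=200, elite_frac=0.1,
                              n_iters=30, smoothing=0.7, seed=0):
    # Illustrative CE search over binary inclusion vectors: each candidate mask
    # marks which training points are kept as support vectors, and score(mask)
    # returns a loss, e.g. training error plus a penalty on mask.sum().
    rng = np.random.default_rng(seed)
    p = np.full(n_points, 0.5)              # Bernoulli inclusion probabilities
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        masks = rng.random((n_samples, n_points)) < p        # sample candidate subsets
        losses = np.array([score(m) for m in masks])
        elite = masks[np.argsort(losses)[:n_elite]]          # lowest-loss subsets
        p = smoothing * elite.mean(axis=0) + (1 - smoothing) * p   # smoothed update
    return p > 0.5

# Hypothetical usage: my_score(mask) would solve the linear program for the
# selected subset and return its penalized objective value.
# selected = ce_select_support_vectors(my_score, n_points=100)

Here p is the vector of Bernoulli inclusion probabilities that the CE iteration pushes toward the lowest-loss subsets; the smoothing parameter keeps the sampling distribution from collapsing prematurely.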