The multinomial logit model is a standard approach for determining the probability of purchase in product line problems. When the purchase probabilities are multiplied by product contribution margins, the resulting profit function is generally nonconcave. Because of this, standard nonlinear search procedures may terminate at a local optimum that is far from the global optimum. We present a simple procedure designed to alleviate this problem. The key idea of this procedure is to find a "path" of prices from the global optimum of a related, but concave, logit profit function to the global optimum of the true (but nonconcave) logit profit function.

Keywords: optimization, logit, pricing, product line modeling
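The abstract gives no implementation details, so the following is only a minimal continuation-style sketch of the "path of prices" idea: optimize a concave surrogate first, then repeatedly warm-start a local search on a blend that moves toward the true nonconcave logit profit. The surrogate form, attractiveness parameters, costs, and step counts are illustrative assumptions, not taken from the paper.

```python
# Hypothetical continuation ("path of prices") sketch for maximizing a logit
# profit function.  The concave surrogate and all numbers are assumptions made
# for illustration; they are not the paper's specification.
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 0.5, 0.2])   # product attractiveness parameters (assumed)
c = np.array([2.0, 1.5, 1.0])   # unit costs (assumed)
b = 1.0                         # price-sensitivity coefficient (assumed)

def shares(p):
    """Multinomial logit purchase probabilities with an outside (no-buy) option."""
    u = np.exp(a - b * p)
    return u / (1.0 + u.sum())

def true_profit(p):
    """Nonconcave logit profit: contribution margins weighted by choice shares."""
    return np.dot(p - c, shares(p))

def surrogate_profit(p):
    """Concave stand-in whose maximizer supplies the starting point (assumed form)."""
    return -0.5 * np.dot(p - c - 1.0, p - c - 1.0)

def neg_blend(p, t):
    """Negative of a convex combination: t = 0 is the surrogate, t = 1 the true profit."""
    return -((1.0 - t) * surrogate_profit(p) + t * true_profit(p))

p = c + 1.0                     # maximizer of the concave surrogate
for t in np.linspace(0.0, 1.0, 21):
    p = minimize(neg_blend, p, args=(t,), method="Nelder-Mead").x  # warm start

print("prices:", np.round(p, 3), " profit:", round(true_profit(p), 4))
```

Each step of the path reuses the previous solution as the starting point, so the local search is only ever asked to move a short distance, which is the intuition behind following a price path rather than attacking the nonconcave profit function directly.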
Fourier-Motzkin elimination is a projection algorithm for solving finite linear programs. We extend Fourier-Motzkin elimination to semi-infinite linear programs, which are linear programs with finitely many variables and infinitely many constraints. Applying projection leads to new characterizations of important properties for primal-dual pairs of semi-infinite programs such as zero duality gap, feasibility, boundedness, and solvability. Extending the Fourier-Motzkin elimination procedure to semi-infinite linear programs yields a new classification of variables that is used to determine the existence of duality gaps. In particular, the existence of what the authors term dirty variables can lead to duality gaps. Our approach has interesting applications in finite-dimensional convex optimization. For example, sufficient conditions for a zero duality gap, such as the Slater constraint qualification, are reduced to guaranteeing that there are no dirty variables. This leads to completely new proofs of such sufficient conditions for a zero duality gap.
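For orientation, here is a minimal sketch of the finite-dimensional Fourier-Motzkin projection step the abstract starts from: eliminating one variable from a system A x <= b by pairing each upper bound on that variable with each lower bound. The semi-infinite extension and the "dirty variable" classification are not reproduced here.

```python
# Classical finite Fourier-Motzkin elimination: project {x : A x <= b} onto the
# coordinates other than x_k.  Illustration only; the paper's semi-infinite
# extension is not shown.
import numpy as np

def fourier_motzkin_eliminate(A, b, k):
    """Return (A', b') describing the projection of {x : A x <= b} without x_k."""
    pos  = [i for i in range(len(b)) if A[i, k] > 0]   # upper bounds on x_k
    neg  = [i for i in range(len(b)) if A[i, k] < 0]   # lower bounds on x_k
    zero = [i for i in range(len(b)) if A[i, k] == 0]  # constraints without x_k

    rows, rhs = [], []
    for i in zero:                          # these survive unchanged
        rows.append(np.delete(A[i], k))
        rhs.append(b[i])
    for i in pos:                           # pair every upper bound with
        for j in neg:                       # every lower bound
            row = A[i] / A[i, k] - A[j] / A[j, k]   # x_k coefficient cancels
            rows.append(np.delete(row, k))
            rhs.append(b[i] / A[i, k] - b[j] / A[j, k])
    return np.array(rows), np.array(rhs)

# Example: project {(x, y) : x + y <= 4, -x + y <= 2, -y <= 0} onto x.
A = np.array([[1.0, 1.0], [-1.0, 1.0], [0.0, -1.0]])
b = np.array([4.0, 2.0, 0.0])
A1, b1 = fourier_motzkin_eliminate(A, b, k=1)
print(A1.ravel(), b1)   # x <= 4 and -x <= 2, i.e. the interval -2 <= x <= 4
```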
Finite-dimensional linear programs satisfy strong duality (SD) and have the "dual pricing" (DP) property. The (DP) property ensures that, given a sufficiently small perturbation of the right-hand-side vector, there exists a dual solution that correctly "prices" the perturbation by computing the exact change in the optimal objective function value. These properties may fail in semi-infinite linear programming where the constraint vector space is infinite dimensional. Unlike the finite-dimensional case, in semi-infinite linear programs the constraint vector space is a modeling choice. We show that, for a sufficiently restricted vector space, both (SD) and (DP) always hold, at the cost of restricting the perturbations to that space. The main goal of the paper is to extend this restricted space to the largest possible constraint space where (SD) and (DP) hold. Once (SD) or (DP) fail for a given constraint space, then these conditions fail for all larger constraint spaces. We give sufficient conditions for when (SD) and (DP) hold in an extended constraint space. Our results require the use of linear functionals that are singular or purely finitely additive and thus not representable as finite support vectors. The key to understanding these linear functionals is the extension of the Fourier-Motzkin elimination procedure to semi-infinite linear programs.
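A finite-dimensional statement of (SD) and (DP), written in our own notation purely for orientation; the paper's contribution concerns how far these properties survive as the constraint space of a semi-infinite program is enlarged.

```latex
% Orientation only: finite-dimensional (SD) and (DP) in notation of our choosing.
\[
  \text{(P)}\quad v(b) = \min_{x}\; c^{\top}x \ \ \text{s.t.}\ Ax \ge b,
  \qquad
  \text{(D)}\quad \max_{y \ge 0}\; b^{\top}y \ \ \text{s.t.}\ A^{\top}y = c.
\]
% (SD): the optimal values of (P) and (D) coincide.
% (DP): for every sufficiently small right-hand-side perturbation d there is an
% optimal dual solution y* (possibly depending on d) that prices it exactly:
\[
  v(b + d) = v(b) + d^{\top} y^{*}.
\]
```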
Abstract-Consider the problem of a constrained Markov Decision Process (MDP). Under a parameterization of the control strategies, the problem can be transformed into a nonlinear optimization problem with nonlinear constraints. Both the cost and the constraints are stationary averages. We assume that the transition probabilities of the underlying Markov chain are unknown: only the values of the control variables are known, as well as the instantaneous values of the cost and the constraints, so no analytical expression for the stationary averages is available. To find the solution to the optimization problem, a stochastic version of a primal/dual method with an augmented Lagrangian is used. The updating scheme uses a "measure-valued" estimator of the gradients that can be interpreted in terms of a finite-horizon version of the Perturbation Analysis (PA) method known as the "perturbation realization factors". Most finite-horizon derivative estimators are consistent as the sample size grows, so it is common to assume that large enough samples can be observed to make the bias negligible. This paper deals with the actual implementation of the gradient estimators over finite horizons with small sample sizes, so that the iterates of the stochastic approximation can be performed very often, as would be required for on-line learning. We identify the asymptotic bias of the stochastic approximation for the constrained optimization method, and in doing so we propose several means to correct it. As is very common with these problems, the bias correction introduces a conflict between precision and speed: the smaller the bias, the slower the reaction time. In the sequel, we present the theoretical basis for the study of bias and learning rate. Our experimental results indicate that smoothing at the faster time scale may not be necessary at all, only at the slower time scale. We include results where the algorithms have to track changes in the environment.
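The following is a minimal sketch of one plausible stochastic primal/dual augmented-Lagrangian iteration of the kind the abstract describes. The noisy oracles below are placeholders standing in for the perturbation-analysis ("realization factor") gradient estimators; the objective, constraint, penalty parameter, and two-time-scale step sizes are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical stochastic primal/dual augmented-Lagrangian iteration for a
# constrained average-cost problem.  All oracles and constants are assumed
# for illustration; they do not reproduce the paper's estimators.
import numpy as np

rng = np.random.default_rng(0)

def noisy_cost_grad(theta):
    """Noisy gradient of an assumed average cost J(theta) = ||theta - 1||^2."""
    return 2.0 * (theta - 1.0) + 0.1 * rng.standard_normal(theta.shape)

def noisy_constraint(theta):
    """Noisy value of an assumed constraint g(theta) = sum(theta) - 1 <= 0."""
    return theta.sum() - 1.0 + 0.1 * rng.standard_normal()

def noisy_constraint_grad(theta):
    """Noisy gradient of the same constraint."""
    return np.ones_like(theta) + 0.1 * rng.standard_normal(theta.shape)

theta = np.zeros(3)   # parameterized control variables
lam = 0.0             # Lagrange multiplier
c_pen = 5.0           # augmented-Lagrangian penalty parameter

for n in range(1, 5001):
    a_n = 1.0 / n**0.6          # primal step size (faster time scale)
    b_n = 1.0 / n               # dual step size (slower time scale)

    g_hat = noisy_constraint(theta)
    mult = max(0.0, lam + c_pen * g_hat)                  # augmented multiplier
    grad = noisy_cost_grad(theta) + mult * noisy_constraint_grad(theta)

    theta = theta - a_n * grad                            # primal descent step
    lam = max(0.0, lam + b_n * g_hat)                     # projected dual ascent

print("theta:", np.round(theta, 3), " lambda:", round(lam, 3))
```

With small-sample gradient estimates in place of these oracles, the estimator bias no longer vanishes, which is the precision-versus-reaction-time trade-off the abstract analyzes.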