The problem of maximizing non-negative monotone submodular functions under a certain constraint has been intensively studied in the last decade. In this paper, we address the problem for functions defined over the integer lattice. Suppose that a non-negative monotone submodular function f : Z_+^n → R_+ is given via an evaluation oracle. Assume further that f satisfies the diminishing return property, which is not an immediate consequence of submodularity when the domain is the integer lattice. Given this, we design polynomial-time (1 − 1/e − ε)-approximation algorithms for a cardinality constraint, a polymatroid constraint, and a knapsack constraint. For a cardinality constraint, we also provide a (1 − 1/e − ε)-approximation algorithm with slightly worse time complexity that does not rely on the diminishing return property.
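Under the diminishing return property, the classical greedy rule (repeatedly increment the coordinate with the largest marginal gain) already attains a (1 − 1/e) guarantee for a cardinality constraint. A minimal sketch under assumed inputs `f` (the oracle), a box bound `c`, and a budget `r`; the paper's algorithms add further techniques (e.g. decreasing thresholds) to reach (1 − 1/e − ε) in polynomial time, which this plain greedy does not capture:

```python
def greedy_lattice_max(f, n, r, c):
    """Greedy for maximizing a monotone DR-submodular f : Z_+^n -> R_+
    subject to the cardinality constraint sum(x) <= r and the box 0 <= x <= c.
    Each step increments the single coordinate with the largest marginal gain."""
    x = [0] * n
    for _ in range(r):
        base = f(x)
        best_i, best_gain = None, 0.0
        for i in range(n):
            if x[i] < c[i]:
                x[i] += 1
                gain = f(x) - base  # marginal gain of one extra unit of coordinate i
                x[i] -= 1
                if gain > best_gain:
                    best_i, best_gain = i, gain
        if best_i is None:  # no positive marginal gain remains
            break
        x[best_i] += 1
    return x
```

For a separable concave function such as f(x) = Σᵢ wᵢ(1 − 2^(−xᵢ)), which is monotone and DR-submodular, the greedy allocates units in decreasing order of the per-unit gains wᵢ·2^(−(xᵢ+1)).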
For an undirected/directed hypergraph G = (V, E), its Laplacian L_G : R^V → R^V is defined such that its "quadratic form" x^⊤L_G(x) captures the cut information of G. In this paper, we present a polynomial-time algorithm that, given an undirected/directed hypergraph G on n vertices, constructs an ε-spectral sparsifier of G with O(n^3 log n/ε^2) hyperedges/hyperarcs. The proposed spectral sparsification can be used to improve the time and space complexities of algorithms for solving problems that involve the quadratic form, such as computing the eigenvalues of L_G, computing the effective resistance between a pair of vertices in G, semi-supervised learning based on L_G, and cut problems on G. In addition, our sparsification result implies that any submodular function f : 2^V → R_+ with f(∅) = f(V) = 0 can be concisely represented by a directed hypergraph. Accordingly, we show that, for any distribution, we can properly and agnostically learn submodular functions f : 2^V → [0, 1] with f(∅) = f(V) = 0, with O(n^4 log(n/ε)/ε^4) samples.
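For undirected hypergraphs, one standard way the quadratic form captures cuts is x^⊤L_G(x) = Σ_{e∈E} w_e · max_{u,v∈e} (x_u − x_v)^2: on the 0/1 indicator vector of a set S, this evaluates to the total weight of hyperedges crossing S. A small sketch under that assumed definition:

```python
def hypergraph_quadratic_form(edges, weights, x):
    """Quadratic form x^T L_G(x) of an undirected hypergraph:
    sum over hyperedges e of w_e * max_{u,v in e} (x[u] - x[v])^2.
    For x the 0/1 indicator of a vertex set S, this equals the
    total weight of hyperedges with vertices on both sides of S."""
    total = 0.0
    for e, w in zip(edges, weights):
        vals = [x[v] for v in e]
        # max_{u,v in e} (x_u - x_v)^2 is attained by the extreme values
        total += w * (max(vals) - min(vals)) ** 2
    return total
```

A sparsifier preserves this quantity within a (1 ± ε) factor for every x simultaneously, which is why it can stand in for G in the eigenvalue, effective-resistance, and cut computations mentioned above.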
As is well known, the smallest possible ratio between the spectral norm and the Frobenius norm of an m × n matrix with m ≤ n is 1/√m and is (up to scalar scaling) attained only by matrices having pairwise orthonormal rows. In the present paper, the smallest possible ratio between spectral and Frobenius norms of n_1 × ⋯ × n_d tensors of order d, also called the best rank-one approximation ratio in the literature, is investigated. The exact value is not known for most configurations of n_1 ≤ ⋯ ≤ n_d. Using a natural definition of orthogonal tensors over the real field (resp., unitary tensors over the complex field), it is shown that the obvious lower bound 1/√(n_1 ⋯ n_{d−1}) is attained if and only if a tensor is orthogonal (resp., unitary) up to scaling. Whether or not orthogonal or unitary tensors exist depends on the dimensions n_1, …, n_d and the field. A connection between the (non)existence of real orthogonal tensors of order three and the classical Hurwitz problem on composition algebras can be established: existence of orthogonal tensors of size ℓ × m × n is equivalent to the admissibility of the triple [ℓ, m, n] to the Hurwitz problem. Some implications for higher-order tensors are then given. For instance, real orthogonal n × ⋯ × n tensors of order d ≥ 3 do exist, but only when n = 1, 2, 4, 8. In the complex case, the situation is more drastic: unitary tensors of size ℓ × m × n with ℓ ≤ m ≤ n exist only when ℓm ≤ n. Finally, some numerical illustrations for spectral norm computation are presented.

2010 Mathematics Subject Classification: 15A69, 15A60, 17A75.
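The spectral norm of an order-three tensor is max |T(u, v, w)| over unit vectors u, v, w, i.e. the value of its best rank-one approximation. A common numerical approach, plausibly of the kind behind illustrations like those mentioned, is the higher-order power method; since it converges to a local maximizer, its output is a certified lower bound on the spectral norm:

```python
import numpy as np

def tensor_spectral_norm(T, iters=100, seed=0):
    """Higher-order power method estimate of the spectral norm
    max |T(u, v, w)| over unit u, v, w, for an order-3 tensor T.
    A local method: the returned value is a lower bound on ||T||_2."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    v = rng.standard_normal(T.shape[1])
    w = rng.standard_normal(T.shape[2])
    for _ in range(iters):
        # Alternately maximize over each factor with the others fixed
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    return abs(np.einsum('ijk,i,j,k->', T, u, v, w))
```

The multiplication tensor of the complex numbers is a real orthogonal 2 × 2 × 2 tensor: its spectral norm is 1 while its Frobenius norm is 2, so it attains the lower bound 1/√(n_1 n_2) = 1/2.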
Submodular function maximization has numerous applications in machine learning and artificial intelligence. Many real applications require multiple submodular objective functions to be maximized, and which function a user regards as important is not known in advance. In such cases, it is desirable to have a small family of representative solutions that would satisfy any user's preference. A traditional approach for solving such a problem is to enumerate the Pareto optimal solutions. However, owing to the massive number of Pareto optimal solutions (possibly exponentially many), it is difficult for a user to select one. In this paper, we propose two efficient methods for finding a small family of representative solutions, based on the notion of regret ratio. The first method outputs a family of fixed size with a nontrivial regret ratio. The second method enables us to choose the size of the output family, and in the biobjective case, it has a provable trade-off between the size and the regret ratio. Using real and synthetic data, we empirically demonstrate that our methods achieve a small regret ratio.
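To make the notion concrete, here is a simplified variant of the regret ratio in which the user's utility is assumed to be one of the given objectives themselves (the notion in the literature typically ranges over all nonnegative weighted combinations of the objectives): the worst relative loss, over objectives, of the best solution in the family compared with the best solution overall.

```python
def regret_ratio(family, objectives, candidates):
    """Simplified regret ratio of a solution family:
    max over objectives f of 1 - (best f-value in family) / (best f-value overall).
    0 means the family contains an optimum for every objective."""
    worst = 0.0
    for f in objectives:
        best_any = max(f(x) for x in candidates)
        best_fam = max(f(x) for x in family)
        worst = max(worst, 1.0 - best_fam / best_any)
    return worst
```

A family with small regret ratio is "representative": whichever objective the user cares about, some member of the family is nearly optimal for it.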
We propose a risk-averse statistical learning framework wherein the performance of a learning algorithm is evaluated by the conditional value-at-risk (CVaR) of losses rather than the expected loss. We devise algorithms based on stochastic gradient descent for this framework. While existing studies of CVaR optimization require direct access to the underlying distribution, our algorithms make a weaker assumption that only i.i.d. samples are given. For convex and Lipschitz loss functions, we show that our algorithm has O(1/√n)-convergence to the optimal CVaR, where n is the number of samples. For nonconvex and smooth loss functions, we show a generalization bound on CVaR. By conducting numerical experiments on various machine learning tasks, we demonstrate that our algorithms effectively minimize CVaR compared with other baseline algorithms.
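The standard route to gradient-based CVaR minimization is the Rockafellar–Uryasev formulation, CVaR_β(L) = min_α [α + E[(L − α)_+]/(1 − β)], which turns CVaR into an expectation that SGD can attack using only i.i.d. samples. A sketch under that formulation, with a hypothetical `losses_grad(theta, z)` interface returning the loss and its gradient in θ (plain SGD only; the paper's algorithms and rates involve further care not shown here):

```python
import numpy as np

def empirical_cvar(losses, beta):
    """min_alpha alpha + mean(max(l - alpha, 0)) / (1 - beta).
    When (1 - beta) * n is an integer this is the mean of the
    top (1 - beta) * n losses."""
    s = sorted(losses, reverse=True)
    k = max(1, int(round((1 - beta) * len(s))))
    return sum(s[:k]) / k

def cvar_sgd(losses_grad, samples, theta0, beta=0.9, lr=0.05, steps=2000, seed=0):
    """Stochastic subgradient descent on the Rockafellar-Uryasev objective
        F(theta, alpha) = alpha + E[(loss(theta, z) - alpha)_+] / (1 - beta),
    whose minimum over alpha equals CVaR_beta of the loss.
    `losses_grad(theta, z)` returns (loss, dloss/dtheta)."""
    rng = np.random.default_rng(seed)
    theta, alpha = float(theta0), 0.0
    for _ in range(steps):
        z = samples[rng.integers(len(samples))]
        loss, g = losses_grad(theta, z)
        ind = 1.0 if loss > alpha else 0.0  # subgradient of (loss - alpha)_+
        theta -= lr * ind * g / (1 - beta)
        alpha -= lr * (1.0 - ind / (1 - beta))
    return theta, alpha
```

At the optimum, α tracks the β-quantile (value-at-risk) of the loss, and θ is updated only on samples in the upper tail, which is what makes the procedure risk-averse.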