We consider solving the low-rank matrix sensing problem with the Factorized Gradient Descent (FGD) method when the true rank is unknown and over-specified, which we refer to as over-parameterized matrix sensing. If the ground-truth signal X* ∈ R^(d×d) is of rank r, but we try to recover it using F F^T where F ∈ R^(d×k) and k > r, the existing statistical analysis falls short, due to a flat local curvature of the loss function around the global minima. By decomposing the factorized matrix F into separate column spaces to capture the effect of the extra ranks, we show that ||F_t F_t^T − X*||_F^2 converges to a statistical error of Õ(k d σ²/n) after Õ((σ_r/σ)·√(n/d)) iterations, where F_t is the output of FGD after t iterations, σ² is the variance of the observation noise, σ_r is the r-th largest eigenvalue of X*, and n is the number of samples. Our results therefore offer a comprehensive picture of the statistical and computational complexity of FGD for the over-parameterized matrix sensing problem.
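For concreteness, below is a minimal numerical sketch of factorized gradient descent for over-parameterized matrix sensing, assuming Gaussian sensing matrices A_i and noisy linear measurements y_i = ⟨A_i, X*⟩ + noise. The problem sizes, step size, and small random initialization are illustrative choices, not the paper's exact experimental setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: ground truth X* has rank r, but we factorize with k > r columns.
d, r, k, n = 30, 2, 5, 2000
sigma = 0.1  # observation noise level

# Rank-r ground truth X* (symmetric PSD), built from a random factor.
U = rng.normal(size=(d, r))
X_star = U @ U.T

# Gaussian sensing matrices A_i and noisy measurements y_i = <A_i, X*> + noise.
A = rng.normal(size=(n, d, d))
y = np.einsum('nij,ij->n', A, X_star) + sigma * rng.normal(size=n)

# Factorized gradient descent on f(F) = (1/2n) * sum_i (<A_i, F F^T> - y_i)^2.
F = 1e-3 * rng.normal(size=(d, k))        # small random initialization
eta = 0.05 / np.linalg.norm(X_star, 2)    # illustrative step size

for t in range(2000):
    residual = np.einsum('nij,ij->n', A, F @ F.T) - y    # shape (n,)
    grad_X = np.einsum('n,nij->ij', residual, A) / n     # gradient w.r.t. X = F F^T
    grad_F = (grad_X + grad_X.T) @ F                     # chain rule through X = F F^T
    F -= eta * grad_F

print("||F F^T - X*||_F^2 =", np.linalg.norm(F @ F.T - X_star, 'fro')**2)
```

Running with k = r versus k > r in this sketch is a quick way to see the slowdown that motivates the analysis: the extra columns of F carry no signal and flatten the curvature around the minimizer.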
Learning a near-optimal policy in a partially observable system remains an elusive challenge in contemporary reinforcement learning. In this work, we consider episodic reinforcement learning in a reward-mixing Markov decision process (MDP). There, a reward function is drawn from one of multiple possible reward models at the beginning of every episode, but the identity of the chosen reward model is not revealed to the agent. Hence, the latent state space, for which the dynamics are Markovian, is not given to the agent. We study the problem of learning a near-optimal policy for reward-mixing MDPs with two reward models (2RM-MDPs). Unlike existing approaches that rely on strong assumptions on the dynamics, we make no assumptions and study the problem in full generality. Indeed, with no further assumptions, even for two switching reward models, the problem requires several new ideas beyond existing algorithmic and analysis techniques for efficient exploration. We provide the first polynomial-time algorithm that finds an ε-optimal policy after exploring Õ(poly(H, ε⁻¹) · S²A²) episodes, where H is the time horizon and S, A are the numbers of states and actions, respectively. This is the first efficient algorithm that does not require any assumptions in partially observed environments where the observation space is smaller than the latent state space.

Algorithm 1: Learning Two Reward-Mixture MDPs
1: Run pure exploration (Algorithm 2) to estimate second-order reward correlations.
2: Estimate the 2RM-MDP parameters M̂ from the collected data (Algorithm 3).
3: Return π̂, the (approximately) optimal policy of M̂.

Notation. P_m denotes the probability of any event measured in the m-th context (or in the m-th MDP). If the probability of an event depends on a policy π, we add a superscript π to P. We denote by V^π_M the expected long-term reward of model M under policy π. We use a hat (ˆ) to denote empirical counterparts. For any set A and d ∈ N, A^d is the d-ary Cartesian product of A. We use a ∨ b for max(a, b) and a ∧ b for min(a, b), for a, b ∈ R. Specifically, for M = 2 we use the shorthand p_m(x) := R_m(r = 1 | x) for m = 1, 2, and define the averaged reward p_+(x) := (p_1(x) + p_2(x))/2, the reward difference p_-(x) := (p_1(x) - p_2(x))/2, and ∆(x) := |p_-(x)|.

Algorithms. Before developing a learning algorithm for the 2RM-MDP, let us provide the intuition behind our algorithm. At a high level, our approach rests on the following observation: the latent reward model of a 2RM-MDP can be recovered from reward correlations and the averaged reward.
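To make this observation concrete, the following sketch assumes exact population moments are available (rather than the empirical estimates produced by Algorithms 2 and 3) and shows how p_1 and p_2 can be recovered from the averaged reward p_+ and pairwise reward correlations; the variable names are illustrative, and this is not the paper's estimation procedure.

```python
import numpy as np

# Two latent Bernoulli reward models over a few state-action pairs x,
# each drawn with probability 1/2 at the start of an episode.
p1 = np.array([0.9, 0.2, 0.6, 0.5])
p2 = np.array([0.1, 0.8, 0.6, 0.3])

# First-order moment: averaged reward p_+(x) = (p1(x) + p2(x)) / 2.
p_plus = 0.5 * (p1 + p2)

# Second-order moment: correlation C(x, x') = E[r(x) r(x')] when both rewards
# are observed in the same episode (same latent context):
#   C(x, x') = (p1(x) p1(x') + p2(x) p2(x')) / 2.
C = 0.5 * (np.outer(p1, p1) + np.outer(p2, p2))

# Key identity: C(x, x') - p_+(x) p_+(x') = p_-(x) p_-(x'),
# where p_-(x) = (p1(x) - p2(x)) / 2.
M = C - np.outer(p_plus, p_plus)

# Recover Delta(x) = |p_-(x)| from the diagonal and relative signs from one row.
delta = np.sqrt(np.clip(np.diag(M), 0.0, None))
anchor = int(np.argmax(delta))     # a state-action with a large reward gap
signs = np.sign(M[anchor])
signs[anchor] = 1.0                # fixes labels only up to swapping the two contexts
p_minus = signs * delta

# Reconstruct the two reward models (up to relabeling of the contexts).
p1_hat = p_plus + p_minus
p2_hat = p_plus - p_minus
print(np.round(p1_hat, 3), np.round(p2_hat, 3))
```

The identity C(x, x') − p_+(x) p_+(x') = p_-(x) p_-(x') is what makes second-order correlations sufficient: the diagonal yields ∆(x), and the off-diagonal signs resolve p_- up to swapping the two contexts.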
We consider stochastic unconstrained bilevel optimization problems in which only first-order gradient oracles are available. While numerous optimization methods have been proposed for tackling bilevel problems, existing methods either tend to require potentially expensive calculations involving Hessians of the lower-level objective, or lack rigorous finite-time performance guarantees. In this work, we propose a Fully First-order Stochastic Approximation (F²SA) method and study its non-asymptotic convergence properties. Specifically, we show that F²SA converges to an ε-stationary solution of the bilevel problem after ε^(-7/2), ε^(-5/2), or ε^(-3/2) iterations (each iteration using O(1) samples) when stochastic noise is present in both level objectives, only in the upper-level objective, or not present (the deterministic setting), respectively. We further show that if we employ momentum-assisted gradient estimators, the iteration complexities can be improved to ε^(-5/2), ε^(-4/2), or ε^(-3/2), respectively. We demonstrate the superior practical performance of the proposed method over existing second-order-based approaches on MNIST data-hypercleaning experiments.
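To illustrate what "fully first-order" means here, the sketch below solves a toy quadratic bilevel problem using only gradients of the upper- and lower-level objectives, via a penalty-style reformulation in the spirit of F²SA. The toy objectives, the fixed multiplier lam, and the step sizes are illustrative assumptions, not the paper's exact algorithm or its schedule for growing the multiplier.

```python
import numpy as np

# Toy bilevel problem:
#   upper level: f(x, y) = (y - 3)^2      (depends on x only through y*(x))
#   lower level: g(x, y) = (y - x)^2, so y*(x) = x and f(x, y*(x)) = (x - 3)^2,
# whose minimizer is x = 3. Only gradients of f and g are used below.

def grad_f(x, y):   # (df/dx, df/dy)
    return 0.0, 2.0 * (y - 3.0)

def grad_g(x, y):   # (dg/dx, dg/dy)
    return -2.0 * (y - x), 2.0 * (y - x)

x, y, z = 0.0, 0.0, 0.0     # upper variable, penalized lower variable, plain lower variable
lam = 50.0                  # penalty multiplier (fixed here; in general it is grown over time)
alpha, beta = 0.02, 0.2     # upper / lower step sizes (illustrative)

for t in range(3000):
    # Inner updates: z tracks argmin_y g(x, y); y tracks argmin_y [lam * g(x, y) + f(x, y)].
    z -= beta * grad_g(x, z)[1]
    y -= beta / lam * (lam * grad_g(x, y)[1] + grad_f(x, y)[1])
    # Fully first-order estimate of the hypergradient d/dx f(x, y*(x)):
    #   grad_x f(x, y) + lam * (grad_x g(x, y) - grad_x g(x, z))
    hypergrad = grad_f(x, y)[0] + lam * (grad_g(x, y)[0] - grad_g(x, z)[0])
    x -= alpha * hypergrad

print("x after training:", round(x, 3), "(bilevel optimum is x = 3)")
```

The point of the construction is that the implicit-function hypergradient, which normally requires Hessian-vector products of g, is replaced by a difference of two lower-level gradients evaluated at the two inner iterates, incurring a bias that shrinks as the multiplier grows.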