Exploration is widely regarded as one of the most challenging aspects of reinforcement learning (RL), with many naive approaches succumbing to exponential sample complexity. To isolate the challenges of exploration, we propose a new "reward-free RL" framework. In the exploration phase, the agent first collects trajectories from an MDP M without a pre-specified reward function. After exploration, it is tasked with computing near-optimal policies for M under a collection of given reward functions. This framework is particularly suitable when there are many reward functions of interest, or when the reward function is shaped by an external agent to elicit desired behavior. We give an efficient algorithm that conducts Õ(S²A·poly(H)/ε²) episodes of exploration and returns ε-suboptimal policies for an arbitrary number of reward functions. We achieve this by finding exploratory policies that visit each "significant" state with probability proportional to its maximum visitation probability under any possible policy. Moreover, our planning procedure can be instantiated by any black-box approximate planner, such as value iteration or natural policy gradient. We also give a nearly matching Ω(S²AH²/ε²) lower bound, demonstrating the near-optimality of our algorithm in this setting.
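The two-phase structure described above lends itself to a compact illustration. The sketch below (in Python, with hypothetical array shapes and a uniform fallback for unvisited state-action pairs) shows only the generic tabular pattern the abstract alludes to: build an empirical transition model from reward-free exploration trajectories, then, once a reward function arrives, plan against that model with finite-horizon value iteration as the black-box planner. It is not the paper's algorithm; in particular, the exploratory policies that guarantee coverage of all significant states are assumed to be given.

```python
import numpy as np

def empirical_model(trajectories, S, A, H):
    """Estimate P_hat[h, s, a, s'] from exploration data collected without
    rewards. Each trajectory is assumed to be a list of (h, s, a, s_next)
    tuples (a hypothetical data format for this sketch)."""
    counts = np.zeros((H, S, A, S))
    for traj in trajectories:
        for h, s, a, s_next in traj:
            counts[h, s, a, s_next] += 1
    totals = counts.sum(axis=-1, keepdims=True)
    # Unvisited (h, s, a) triples fall back to a uniform guess.
    return np.divide(counts, totals,
                     out=np.full_like(counts, 1.0 / S),
                     where=totals > 0)

def plan_with_value_iteration(P_hat, reward, H):
    """Planning phase: given the empirical model and a reward function
    reward[h, s, a] supplied only after exploration, run finite-horizon
    value iteration and return the greedy policy and value estimates."""
    S = P_hat.shape[1]
    V = np.zeros((H + 1, S))          # V[H] = 0 terminal values
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        # Q[s, a] = r_h(s, a) + sum_{s'} P_hat[h, s, a, s'] * V[h+1, s']
        Q = reward[h] + P_hat[h] @ V[h + 1]
        pi[h] = Q.argmax(axis=1)
        V[h] = Q.max(axis=1)
    return pi, V
```

Because the estimated model is reward-independent, the same P_hat can be reused to call plan_with_value_iteration for arbitrarily many reward functions, which is the point of the reward-free framing.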
Inverted-type polymer light-emitting diodes with an Au-nanoparticle-modified ITO cathode exhibited improved brightness from 5900 to 15,000 cd m⁻² (1.5-fold enhancement) and enhanced luminous efficiency from 4.4 to 10.5 cd A⁻¹ (1.4-fold enhancement) when the greenish-emissive polymer P-PPV was used as the active layer. Both experimental and theoretical results show that the improvement is mainly attributed to the effective overlap between the localized surface plasmon resonance induced by the Au nanoparticles and the exciton-quenching region at the ZnO/P-PPV interface, which makes originally electrode-quenched excitons emissive and increases exciton efficiency.
A major challenge of multiagent reinforcement learning (MARL) is the curse of multiagents, where the size of the joint action space scales exponentially with the number of agents. This remains a bottleneck for designing efficient MARL algorithms even in a basic scenario with finitely many states and actions. This paper resolves this challenge for the model of episodic Markov games. We design a new class of fully decentralized algorithms, V-learning, which provably learns Nash equilibria (in the two-player zero-sum setting), correlated equilibria, and coarse correlated equilibria (in the multiplayer general-sum setting) in a number of samples that scales only with max_{i∈[m]} A_i, where A_i is the number of actions for the i-th player. This is in sharp contrast to the size of the joint action space, which is ∏_{i=1}^m A_i. V-learning (in its basic form) is a new class of single-agent RL algorithms that convert any adversarial bandit algorithm with suitable regret guarantees into an RL algorithm. Similar to the classical Q-learning algorithm, it performs incremental updates to the value functions. Unlike Q-learning, it maintains only estimates of V-values instead of Q-values. This key difference allows V-learning to achieve the claimed guarantees in the MARL setting by simply letting all agents run V-learning independently.
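As a rough illustration of the "V-values plus a per-state adversarial bandit" idea, here is a minimal single-agent sketch in Python. The exponential-weights bandit, the step size (H+1)/(H+t), the loss scaling, and the class name VLearningAgent are all illustrative assumptions; the paper's algorithm additionally uses exploration bonuses and a dedicated output-policy construction that this sketch omits.

```python
import numpy as np

class VLearningAgent:
    """Sketch of a V-learning-style agent: it keeps only V-value estimates
    and, at every (step, state), an adversarial bandit (here exponential
    weights with importance-weighted losses, a stand-in for any no-regret
    bandit) over its own actions. All constants are illustrative."""

    def __init__(self, S, A, H, eta=0.1):
        self.S, self.A, self.H, self.eta = S, A, H, eta
        self.V = np.zeros((H + 1, S))          # V[H] = 0; no Q-table kept
        self.weights = np.ones((H, S, A))      # bandit weights per (h, s)
        self.visits = np.zeros((H, S), dtype=int)

    def policy(self, h, s):
        w = self.weights[h, s]
        return w / w.sum()

    def act(self, h, s, rng):
        return rng.choice(self.A, p=self.policy(h, s))

    def update(self, h, s, a, r, s_next):
        # Incremental update, Q-learning style but on V-values only.
        self.visits[h, s] += 1
        t = self.visits[h, s]
        alpha = (self.H + 1) / (self.H + t)    # illustrative step size
        target = r + self.V[h + 1, s_next]
        self.V[h, s] = (1 - alpha) * self.V[h, s] + alpha * target
        # Feed the bandit an importance-weighted loss for the played action.
        p = self.policy(h, s)[a]
        loss = (self.H - target) / self.H      # map the target to a ~[0, 1] loss
        self.weights[h, s, a] *= np.exp(-self.eta * loss / max(p, 1e-8))
```

In the multi-agent setting, each player would run an independent copy of such an agent over its own action set A_i, which is what keeps the sample complexity free of the product ∏_{i=1}^m A_i.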