2022
DOI: 10.1609/aiide.v18i1.21959

A Review of Uncertainty for Deep Reinforcement Learning

Abstract: Uncertainty is ubiquitous in games, both in the agents playing games and often in the games themselves. Working with uncertainty is therefore an important component of successful deep reinforcement learning agents. While there has been substantial effort and progress in understanding and working with uncertainty for supervised learning, the body of literature for uncertainty-aware deep reinforcement learning is less developed. While many of the same problems regarding uncertainty in neural networks for supervi…

Cited by 22 publications (11 citation statements)
References 17 publications
“…In the real world, conservation efforts are embedded in dynamic and uncertain processes such as a changing climate and deforestation. The challenge of overcoming system uncertainty is one of the key promises of DRL, as it arises in many sequential decision‐making problems (Lockwood & Si, 2022). The incorporation of recurrent deep learning models into DRL has shown potential to solve partially observable MDPs with greater success than more complicated methods (Ni et al., 2022).…”
Section: Discussion (mentioning)
confidence: 99%
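The recurrent approach this excerpt points to can be illustrated with a short sketch: a recurrent network compresses the observation history into a hidden state that serves as a learned belief over the unobserved environment state. The PyTorch snippet below is a generic illustration under assumed dimensions and architecture, not the specific model of Ni et al. (2022).

```python
# Minimal sketch of a recurrent policy for a partially observable MDP.
# Architecture and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim, action_dim, hidden_dim=128):
        super().__init__()
        # The GRU's hidden state summarizes the observation history,
        # standing in for the unobservable environment state.
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, action_dim)

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim)
        summary, hidden = self.gru(obs_seq, hidden)
        return self.head(summary), hidden  # action logits per step
```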
“…• Motivation: Inspired by the recent success of uncertainty-oriented exploration in improving sample efficiency of RL algorithms [13,53], we use an innovative method based on multiple-model adaptive estimation to approximate the environment model and the agent's uncertainty about it. We then incorporate estimated uncertainty about the approximated model with the MB-SF framework to derive a novel form of uncertainty-aware exploration.…”
Section: Contributions (mentioning)
confidence: 99%
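As a rough illustration of estimating an agent's uncertainty about an approximated environment model, the sketch below uses disagreement within an ensemble of learned dynamics models as a proxy for epistemic uncertainty. This is a generic construction chosen for brevity, not the multiple-model adaptive estimation or MB-SF method the excerpt describes.

```python
# Sketch: epistemic uncertainty as disagreement among an ensemble of
# dynamics models. A generic proxy, not the cited MMAE/MB-SF method.
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, state, action):
        # Predict the next state from the current state and action.
        return self.net(torch.cat([state, action], dim=-1))

def epistemic_uncertainty(models, state, action):
    # Variance of next-state predictions across ensemble members:
    # high disagreement flags transitions the agent knows little about.
    preds = torch.stack([m(state, action) for m in models])
    return preds.var(dim=0).mean(dim=-1)
```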
“…Count-based approaches are also notoriously short-sighted, causing the agent to become stuck in local minima [50]. A compelling alternative direction for exploration is to reduce uncertainty about the environment [14,45,51,52,53,54,55]. The uncertainty in this context refers to epistemic or parametric uncertainty [56], which is provoked by the agent's imperfect knowledge of the environment given limited samples.…”
Section: Introduction (mentioning)
confidence: 99%
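The count-based baseline criticized here typically shapes the reward with a visitation bonus of the form r + β/√N(s), which decays with visits to a state rather than with remaining uncertainty. A tabular sketch (the constant β and the discrete-state setting are illustrative assumptions):

```python
# Sketch of a count-based exploration bonus for discrete states.
from collections import defaultdict
import math

class CountBonus:
    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = defaultdict(int)

    def shaped_reward(self, state, reward):
        self.counts[state] += 1
        # The bonus decays with visitation, not with remaining
        # uncertainty, which is the short-sightedness noted above.
        return reward + self.beta / math.sqrt(self.counts[state])
```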
“…Reinforcement learning (RL) has been widely applied in various domains, such as robotics and autonomous vehicles, by creating decision-making agents that interact with their environments and receive reward signals. Despite the successes in many domains, poor sample efficiency during learning makes deploying RL agents in real-world applications unfeasible [1,2]. This challenge becomes even more severe when sample collection is expensive or risky.…”
Section: Introduction (mentioning)
confidence: 99%
“…This challenge becomes even more severe when sample collection is expensive or risky. One promising approach for improving sample efficiency is uncertainty-aware exploration, which uses uncertainty in both the agent and the environment [2,3]. The uncertainty in the agent, known as epistemic uncertainty, arises from the agent's imperfect knowledge about the environment.…”
Section: Introduction (mentioning)
confidence: 99%
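One common way to turn epistemic uncertainty into exploration, in the spirit of this excerpt, is to act optimistically with respect to disagreement in an ensemble of value estimates. The UCB-style rule and names below are illustrative assumptions, not a method from the cited paper.

```python
# Sketch: optimistic action selection from a Q-ensemble, where the
# ensemble standard deviation approximates epistemic uncertainty.
import torch

def select_action(q_ensemble, state, kappa=1.0):
    # q_ensemble: networks mapping a state to per-action Q-values.
    qs = torch.stack([q(state) for q in q_ensemble])  # (K, actions)
    mean, std = qs.mean(dim=0), qs.std(dim=0)
    # Optimism bonus proportional to ensemble disagreement.
    return int(torch.argmax(mean + kappa * std))
```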