2021
DOI: 10.3390/app11199191
Data-Driven Reinforcement-Learning-Based Automatic Bucket-Filling for Wheel Loaders

Abstract: Automation of bucket-filling is of crucial significance to the fully automated systems for wheel loaders. Most previous works are based on a physical model, which cannot adapt to the changeable and complicated working environment. Thus, in this paper, a data-driven reinforcement-learning (RL)-based approach is proposed to achieve automatic bucket-filling. An automatic bucket-filling algorithm based on Q-learning is developed to enhance the adaptability of the autonomous scooping system. A nonlinear, non-parame…

Cited by 11 publications (7 citation statements). References 23 publications.
“…Q-learning has also been applied to the bucket-filling problem. This was achieved by removing the physical model of the wheel loader and instead using a statistical model predicting the state of the wheel loader at some time step [34]. From this model, an agent was trained to perform the bucket-filling process.…”
Section: Bucket-filling
confidence: 99%
“…Some previous work has explicitly leveraged FSMs within its solution; alternatively, the solution can be described as an FSM, for example, by decomposing the scooping task into fuzzy behaviors using finite-state machines [22], flow charts of modeling and action [34], or a step-based solution [6].…”
Section: Introduction
confidence: 99%
“…In Fernando et al (2019), the method from the previous paper is explained in more detail, and the four‐step state machine from Dobson et al (2017) is added to the method. In Huang et al (2021), an automatic bucket‐filling algorithm based on Q‐learning, a reinforcement learning algorithm, is proposed. The state representation used considers the velocity, the tilt cylinder pressure, and the lift cylinder pressure.…”
Section: Problem Description and Related Work
confidence: 99%
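The statement above describes a Q-learning agent whose state combines velocity, tilt-cylinder pressure, and lift-cylinder pressure. A minimal tabular sketch of that setup follows; the discretization bins, action names, rewards, and learning parameters are hypothetical illustrations, not values taken from Huang et al. (2021).

```python
import random

# Hypothetical action set for a bucket-filling controller.
ACTIONS = ["lift", "tilt", "throttle_up", "throttle_down"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning parameters

def discretize(velocity, tilt_pressure, lift_pressure):
    """Map continuous sensor readings to a coarse discrete state tuple.

    Bin widths (1 m/s, 50 bar) are placeholder choices.
    """
    v_bin = min(int(velocity // 1.0), 4)
    t_bin = min(int(tilt_pressure // 50.0), 4)
    l_bin = min(int(lift_pressure // 50.0), 4)
    return (v_bin, t_bin, l_bin)

def q_update(Q, state, action, reward, next_state):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def choose_action(Q, state):
    """Epsilon-greedy action selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

# Usage: discretize raw sensor readings, then update the table from one transition.
Q = {}
s = discretize(0.5, 10.0, 10.0)
q_update(Q, s, "lift", 1.0, s)
```

In this sketch, the lookup table replaces any physical model of the loader: the agent learns values directly from observed (state, action, reward, next-state) transitions, which matches the model-free, data-driven framing the citation describes.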
“…The described works use different sensor setups. In Huang et al (2021), a global positioning system (GPS) attached to the wheel loader is used. In Hejase and Ozguner (2022), several simulated sensors are considered (GPS, stereo cameras, LIDAR, and RADAR), but only the GPS is used by the system.…”
Section: Problem Description and Related Work
confidence: 99%