2015
DOI: 10.2514/1.i010304

Ground Delay Program Analytics with Behavioral Cloning and Inverse Reinforcement Learning

Abstract: Historical data are used to build two types of models that predict Ground Delay Program implementation decisions and produce insights into how and why those decisions are made. More specifically, behavioral cloning and inverse reinforcement learning models are built that predict hourly Ground Delay Program implementation at Newark Liberty International and San Francisco International airports. Data available to the models include actual and scheduled air traffic metrics and observed and forecasted weather conditions…
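The abstract describes behavioral cloning as supervised learning of a decision policy from historical records: hourly traffic and weather features in, the expert's implement/don't-implement decision out. A minimal sketch, using synthetic data and hypothetical feature names standing in for the paper's actual metrics:

```python
# Illustrative behavioral-cloning sketch (NOT the paper's model).
# Features and data are synthetic assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical hourly features: [scheduled arrivals, ceiling, wind speed]
X = rng.normal(size=(500, 3))
# Synthetic "expert" labels: 1 = GDP implemented that hour, 0 = not
y = (X @ np.array([0.8, -1.2, 0.5]) + rng.normal(scale=0.3, size=500) > 0).astype(int)

# Behavioral cloning = fit a classifier that imitates the expert's decisions
clf = LogisticRegression().fit(X, y)
predictions = clf.predict(X[:5])
```

Inverse reinforcement learning, by contrast, would infer a reward function that makes the observed decisions appear (near-)optimal, rather than mapping states to actions directly.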

Cited by 23 publications (6 citation statements) | References 21 publications
“…This includes a preprocessing step, which involves generating an expert dataset through supervised training based on the possible outcome of the guided beams given that the pose of tracking (feedback) is known to the user. The action is based on learning a policy through behavior cloning [ 127 ] by mapping the multi-sensory information with the respective optimal beam based on the pre-trained dataset. The reward is then determined from the action to select the best scheduling policy for a low-latency data delivery.…”
Section: Discussion and Future Scope
confidence: 99%
“…Various state-of-the-art classification techniques for different T-wave patterns are sourced from the existing literature [58,59]. In constructing a combined solution for better classification of T-wave anomalies, the best possible research questions were first identified after reviewing the literature [5,[50][51][52][53]. These research questions are analyzed systematically, and the solutions are then reported as a combined solution (better classification and visibility of T-wave anomalies).…”
Section: Methodological Comparison
confidence: 99%
“…This SLR is a joint alignment of the different behaviors of different T-wave episodes, along with a discussion of T-wave dependency analysis. Finally, combining the above with neural models achieves robust and accurate classification of different T waves [50][51][52][53][54]. In the context of accurate and robust classification, the proposed idea is executed with the query-string generation method.…”
Section: Abstract for Searching
confidence: 99%
“…To this end, it would be more beneficial to develop data-based guidance algorithms for guidance problems that suffer from uncertainties, e.g., target movement and aerodynamic force. Considering the properties of the guidance problem, leveraging the reinforcement learning (RL) concept might be most appropriate for developing a data-based guidance algorithm [15,16]. Previous works using RL to solve control problems mainly focused on the applications of robotics, with few works addressing aerospace guidance problems.…”
Section: Introduction
confidence: 99%