This paper presents two problem formulations for scheduling the maintenance of a fighter aircraft fleet under conflict operating conditions. In the first formulation, the average availability of aircraft is maximized by choosing when to start the maintenance of each aircraft. In the second formulation, the availability of aircraft is kept above a specified target level by choosing whether or not to perform each maintenance activity. Both formulations are cast as semi-Markov decision problems (SMDPs) that are solved using reinforcement learning (RL) techniques. The solutions are maintenance policies that depend on the states of the aircraft. Numerical experiments indicate that RL is a viable approach for devising conflict-time maintenance policies. The obtained solutions provide insight into efficient maintenance decisions and the level of readiness that the fleet can sustain.
Simulation-based production scheduling approaches are emerging as alternatives to optimization and to simpler approaches such as priority rules. This paper presents an application of simulation-based finite scheduling at Albany International, the largest manufacturer of paper machine clothing in the world. Simulation is used as a decision support tool for manual schedule creation. User experiences have been encouraging. We argue that an optimization-based approach is not necessarily the most economical and identify a number of tentative key enablers of a simulation-based solution. The case indicates that a simulation-based solution is a viable option when the production process does not include combination of materials and local sequencing is adequate. A simulation-based solution capitalizes on this existing source of tacit knowledge by giving expert human schedulers tools for testing and improving schedules.
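The decision-support idea described above can be illustrated with a minimal sketch: a human scheduler proposes a job sequence, and a deterministic simulation reports the consequences (completion times and lateness) so alternative sequences can be tested and compared. All job names, times, and the single-machine model are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    processing_time: float  # hours on the bottleneck machine
    due: float              # promised delivery time, hours from now

def simulate(sequence):
    """Run a candidate sequence through one machine deterministically
    and report each job's completion time and lateness."""
    clock, report = 0.0, []
    for job in sequence:
        clock += job.processing_time
        report.append((job.name, clock, max(0.0, clock - job.due)))
    return report

# The scheduler proposes an order and inspects the consequences:
jobs = [Job("fabric-A", 4, 6), Job("fabric-B", 2, 3), Job("fabric-C", 5, 12)]
for name, done, late in simulate(jobs):
    print(f"{name}: done at t={done}, late by {late}")
```

Reordering the `jobs` list and rerunning `simulate` lets the expert compare schedules quickly, which is the tacit-knowledge-plus-tool workflow the abstract describes.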
Fighter aircraft are typically maintained periodically on the basis of cumulated usage hours. In a fleet of aircraft, the timing of the maintenance therefore depends on the allocation of flight time. A fleet with limited maintenance resources faces a design problem in assigning the aircraft to flight missions so that the overall amount of maintenance needs does not exceed the maintenance capacity. We consider the assignment of aircraft to flight missions as a Markov Decision Problem over a finite time horizon. The average availability of aircraft is taken as the optimization criterion. An efficient assignment policy is solved using a Reinforcement Learning technique called Q-learning. We compare the performance of the Q-learning algorithm to a set of heuristic assignment rules using problem instances that involve varying numbers of aircraft and types of periodic maintenance. Moreover, we consider the possibilities of practical implementation of the produced solutions. ©2007 IEEE
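The approach described above, tabular Q-learning applied to assigning aircraft to missions so that periodic maintenance does not exhaust availability, can be sketched on a toy model. The fleet size, hour thresholds, maintenance duration, and reward (availability after each transition) below are illustrative assumptions, not the paper's actual problem instances.

```python
import random
from collections import defaultdict

# Toy parameters (illustrative, not from the paper):
N_AIRCRAFT = 2
HOURS_TO_MAINT = 3   # flight hours before periodic maintenance is due
MAINT_DURATION = 2   # time steps spent in maintenance

# State per aircraft: 0..HOURS_TO_MAINT-1 = hours flown since maintenance;
# negative values = remaining maintenance time (counting up to 0).
def step(state, action):
    """Assign aircraft `action` to the mission for one period."""
    s = list(state)
    flew = s[action] >= 0           # only an available aircraft can fly
    for i in range(N_AIRCRAFT):     # maintenance timers advance regardless
        if s[i] < 0:
            s[i] += 1
    if flew:
        s[action] += 1
        if s[action] == HOURS_TO_MAINT:
            s[action] = -MAINT_DURATION   # threshold reached: into the shop
    reward = sum(1 for x in s if x >= 0)  # availability after the transition
    return tuple(s), reward

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.95, 0.1
for episode in range(3000):
    state = (0,) * N_AIRCRAFT
    for t in range(30):             # finite horizon, as in the formulation
        if random.random() < eps:   # epsilon-greedy exploration
            action = random.randrange(N_AIRCRAFT)
        else:
            action = max(range(N_AIRCRAFT), key=lambda a: Q[state, a])
        nxt, r = step(state, action)
        best_next = max(Q[nxt, a] for a in range(N_AIRCRAFT))
        Q[state, action] += alpha * (r + gamma * best_next - Q[state, action])
        state = nxt

# Greedy assignment policy extracted from the learned Q-table:
policy = lambda s: max(range(N_AIRCRAFT), key=lambda a: Q[s, a])
```

The learned policy tends to spread flight hours so that aircraft do not hit their maintenance thresholds simultaneously, which is the intuition behind maximizing average availability; a heuristic baseline (e.g. always flying the least-used aircraft) could be compared against `policy` in the same loop.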