The number of telemetry parameters in a typical spacecraft is constantly increasing. At the same time, the number of operators allocated to each spacecraft to check those parameters is constantly decreasing. Techniques such as limit checking are well known, but they take time and effort to define, enter and manage as the mission evolves. The result is that the vast majority of telemetry parameters are not limit checked. In 2014, the Advanced Operations Concepts Office at ESA/ESOC decided to see if we could change this by employing Big Data-type techniques on the data. The idea was simple: we asked our partner, KU Leuven of Belgium, to define future checks for all telemetry parameters given one year's worth of historical data. No engineering knowledge was provided, and the derivation of the checks had to be completely automatic, i.e. the checks had to be derived solely from the data itself with no human intervention. The mission we chose was Venus Express, and the learning period ended just before the aero-braking activities started. We then applied these checks to the following three months of data, which included interesting activities such as aero-braking preparation and aero-braking itself. This test data was not provided to KU Leuven until after they had submitted their checks to us for validation. This paper describes KU Leuven's response to this challenge. They decided that in theory every parameter should be checkable, and set about developing a statistical approach that could be applied to every parameter. Later a compromise was made and the parameters were split into two groups: discrete parameters (parameters that historically have taken only a limited number of values) and continuous parameters (parameters that have taken on many values in the past). For the former group the team applied a generic technique based on Poincaré plots, and for the latter a generic technique based on Kernel Density Estimates (KDEs). The work was also expanded to provide checks on unusual changes in the KDE as well as real-time checks on individual parameter values. The paper then goes on to describe the validation exercise carried out at ESOC, in which the delivered checks were run on the new data and the results compared to actual operational events. After some optimisations, which were required to reduce the number of false positives to a reasonable level, the validation team produced some extremely interesting results, giving an accurate and detailed insight into future operations. ESOC is currently planning to deploy these techniques operationally on flying spacecraft in the near future.
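To make the two techniques named in the abstract concrete, the following is a minimal Python sketch of how such checks could be derived purely from historical data: a KDE-based density check for continuous parameters and a Poincaré-plot (transition-pair) check for discrete ones. This is not the authors' implementation; all function names, thresholds and design choices here are hypothetical illustrations.

```python
import numpy as np
from scipy.stats import gaussian_kde

# --- Continuous parameters: KDE-based check (sketch) ---
# Fit a KDE to the historical samples, then flag new samples whose
# estimated density falls below a threshold calibrated on the training
# data itself (here: the 0.1% least-dense training values).
def fit_kde_check(train_values, quantile=0.001):
    kde = gaussian_kde(train_values)
    densities = kde(train_values)
    threshold = np.quantile(densities, quantile)

    def check(new_values):
        # True marks a value as novel (density rarely seen in training).
        return kde(new_values) < threshold
    return check

# --- Discrete parameters: Poincaré-plot check (sketch) ---
# Record every consecutive (x_t, x_{t+1}) pair observed historically;
# a pair never seen in the training year is flagged as a novel transition.
def fit_poincare_check(train_values):
    seen = set(zip(train_values[:-1], train_values[1:]))

    def check(new_values):
        return [pair not in seen
                for pair in zip(new_values[:-1], new_values[1:])]
    return check
```

Both checks share the property the abstract emphasises: they are derived from the data alone, with no engineering knowledge or manually tuned limits.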
The number of telemetry parameters in a typical spacecraft is constantly increasing. At the same time, the number of operators allocated to each spacecraft to check those parameters is constantly decreasing. Techniques such as limit checking are well known, but they take time and effort to define, enter and manage as the mission evolves. The result is that the vast majority of telemetry parameters are not limit checked in real time. In 2014, the Advanced Operations Concepts Office at ESA/ESOC decided to see if we could change this by employing Big Data-type techniques on the data. The idea was simple: we asked our partner, SATE of Italy, to define future checks for all telemetry parameters given one year's worth of historical data. No engineering knowledge was provided, and the derivation of the checks had to be completely automatic, i.e. the checks had to be derived solely from the data itself with no human intervention. The mission we chose was Venus Express (VEX), and the learning period ended just before the aero-braking activities started. We then applied these checks to the following three months of data, which included interesting activities such as aero-braking preparation and aero-braking itself. This test data was not provided to SATE until after they had submitted their checks to us for validation. This paper describes SATE's response to this challenge. SATE decided to take a very pragmatic, engineering view of the problem and defined algorithms to search for anything that could be classed as constant in the data. This could be simple features of the data, such as the average, or more exotic features, such as the harmonic mean, FFT coefficients and features
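As a rough illustration of this feature-constancy idea, here is a minimal Python sketch: compute a set of candidate features (mean, harmonic mean, FFT coefficients and the like) over fixed-length windows of the training year, derive the band each feature stayed within, and flag any new window whose features leave that band. The window length, margin and feature set are hypothetical choices, not SATE's actual algorithm.

```python
import numpy as np
from scipy.stats import hmean

def window_features(window):
    # Candidate "constants" computed per window; the FFT peak captures
    # the dominant non-DC spectral component.
    spectrum = np.abs(np.fft.rfft(window))
    return {
        "mean": np.mean(window),
        "std": np.std(window),
        # hmean requires positive input, so take magnitudes plus a tiny offset.
        "hmean": hmean(np.abs(window) + 1e-12),
        "fft_peak": spectrum[1:].max() if len(spectrum) > 1 else 0.0,
    }

def learn_bounds(train, window_len=256, margin=0.1):
    # Slice the training data into non-overlapping windows and record the
    # historical band (min/max plus a safety margin) of each feature.
    windows = [train[i:i + window_len]
               for i in range(0, len(train) - window_len + 1, window_len)]
    feats = [window_features(w) for w in windows]
    bounds = {}
    for name in feats[0]:
        values = np.array([f[name] for f in feats])
        span = values.max() - values.min()
        bounds[name] = (values.min() - margin * span,
                        values.max() + margin * span)
    return bounds

def check_window(window, bounds):
    # Return the names of features that fall outside their historical band.
    feats = window_features(window)
    return [n for n, v in feats.items()
            if not (bounds[n][0] <= v <= bounds[n][1])]
```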