The use of unmanned aircraft in the National Airspace System (NAS) has been characterized as the next great step in the evolution of civil aviation. Although the use of unmanned aircraft systems (UAS) in military and public service operations is proliferating, civil use of UAS remains limited in the United States today. This report focuses on one particular regulatory challenge: classifying UAS in order to assign airworthiness standards. Classification is useful for ensuring that meaningful differences in design are accommodated by certification to different standards, and that aircraft with similar risk profiles are held to similar standards. This paper provides observations on how the current regulations for classifying manned aircraft, based on the dimensions of aircraft class and operational aircraft category, could apply to UAS. This report finds that existing aircraft classes align well with the types of UAS that currently exist; the operational categories, however, are more difficult to align with proposed UAS use in the NAS. Specifically, the factors used to group manned aircraft into similar risk profiles do not necessarily capture all relevant UAS risks. UAS classification is investigated by gathering classification approaches from a broad spectrum of organizations and then identifying and evaluating the classification factors underlying those approaches. This initial investigation concludes that factors beyond those currently used to group manned aircraft for the purpose of assigning airworthiness standards will be needed to adequately capture the risks associated with UAS and their operations.
Over the last decade the United States Government has significantly increased its use of commercial-off-the-shelf (COTS) software, both as stand-alone solutions and as components in safety-critical systems. This increased use stems from the realization that pre-existing software products can lower development costs, shorten development time, and keep pace with the changing software market. The Federal government, particularly with regard to safety-critical systems, has found that COTS software is currently not plug-and-play, carries significant trade-offs, and usually entails a "cradle-to-grave" dependence on the software manufacturer. Unfortunately, there is currently no standard "best commercial practice" for the acceptance of COTS software. Ad hoc attempts to apply standards for commercial software acceptance have been based upon subjective criteria and have proven imprecise and error-prone. Moreover, acceptability in safety-critical systems generally demands analysis of source code. Many software vendors, however, have chosen to keep such source code proprietary due to liability and intellectual property concerns. When the source code is not available, little can be done to ensure the safety, reliability, and integrity of the software. This paper discusses the COTS Score, an approach that aids in determining the acceptability of COTS software. The process applies predictive techniques used for financial credit scoring to the COTS domain. The methodology addresses the issue of acceptability by incorporating both functional and environmental software measures related to reliability, compatibility, certifiability, obsolescence, and life cycle, including trade-off analyses. This approach satisfies NASA and ISO 9001 requirements to define an acceptance procedure for COTS software.
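The credit-scoring analogy above can be illustrated with a minimal sketch: a weighted sum of normalized software measures yields a single acceptability score, compared against a threshold. The measure names, weights, and threshold below are illustrative assumptions, not the paper's actual COTS Score model.

```python
# Hypothetical COTS Score sketch: a weighted acceptability score combining
# functional and environmental measures, by analogy with financial credit
# scoring. Weights and measure names are assumptions for illustration only.

# Relative importance of each measure (weights sum to 1.0).
WEIGHTS = {
    "reliability": 0.30,
    "compatibility": 0.25,
    "certifiability": 0.20,
    "obsolescence": 0.15,   # resistance to obsolescence
    "life_cycle": 0.10,     # life-cycle / vendor-support outlook
}

def cots_score(measures: dict) -> float:
    """Return a weighted acceptability score in [0, 1].

    Each measure is assumed pre-normalized to [0, 1], higher is better.
    """
    return sum(WEIGHTS[name] * measures[name] for name in WEIGHTS)

# Example evaluation of one candidate COTS product (values invented).
candidate = {
    "reliability": 0.9,
    "compatibility": 0.8,
    "certifiability": 0.6,
    "obsolescence": 0.7,
    "life_cycle": 0.5,
}
score = cots_score(candidate)
ACCEPT_THRESHOLD = 0.7  # assumed cutoff chosen by the acquiring organization
print(f"COTS Score: {score:.2f}, accept: {score >= ACCEPT_THRESHOLD}")
```

In a real acceptance process the weights would be calibrated against historical outcomes, much as credit-scoring models are fit to repayment data, rather than assigned by hand as here.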
Since roughly 400 BC, when humans first replicated flying behavior with kites, up until the turn of the 20th century, when the Wright brothers performed the first successful powered human flight, flight functions have become available to humans only with significant support from man-made structures and devices. Over the past 100 years or so, technology has enabled several flight functions to migrate to automation and/or decision support systems. This migration continues with the United States' NextGen initiative and Europe's Single European Sky (SESAR). These overhauls of the airspace system will be accomplished by accommodating the functional capabilities, benefits, and limitations of technology and automation together with the unique and sometimes overlapping functional capabilities, benefits, and limitations of humans. This paper discusses how a safe and effective migration of any flight function must consider several interrelated issues, including, for example, shared situation awareness and automation addiction (overreliance on automation). A long-term philosophical perspective is presented that considers all of these issues by asking, primarily, the following questions: How does one find an acceptable level of risk tolerance when allocating functions to automation versus humans? How does one measure or predict with confidence what the risks will be? These two questions and others are considered from the two most-discussed paradigms involving the use of increasingly complex systems in the future: "humans as operators" and "humans as monitors."