The recent public release of high-resolution Synthetic Aperture Radar (SAR) data collected by the DARPA/AFRL Moving and Stationary Target Acquisition and Recognition (MSTAR) program has provided a unique opportunity to promote and assess progress in SAR ATR algorithm development. This paper suggests general principles to follow and reports on a specific ATR performance experiment that applies these principles to this data. The principles and experiments are motivated by AFRL experience with the evaluation of the MSTAR ATR.
Testing a SAR Automatic Target Recognition (ATR) algorithm at or very near its training conditions often yields near-perfect results, as we commonly see in the literature. This paper describes a series of experiments near and not so near to ATR algorithm training conditions. Experiments are set up to isolate individual Extended Operating Conditions (EOCs), and performance is reported at these points. Additional experiments are set up to isolate specific combinations of EOCs, and the SAR ATR algorithm's performance is measured there as well. The experiments presented here are a by-product of a DARPA/AFRL Moving and Stationary Target Acquisition and Recognition (MSTAR) program evaluation conducted in November of 1997. Although the tests conducted here are in the domain of EOCs, they do not encompass the "real world" (i.e., what you might see on the battlefield) problem. In addition to performance results, this paper describes an evaluation methodology, including the Extended Operating Condition concept as well as the data, algorithm, and figures of merit. In summary, this paper highlights the sensitivity that a baseline Mean Squared Error (MSE) ATR algorithm has to operating conditions both near and at varying degrees away from the training conditions.
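To make the baseline concrete, a minimum-MSE template classifier scores a test image chip against a bank of reference templates built from training imagery and declares the class of the best match, optionally rejecting chips whose best score exceeds a threshold. The Python sketch below illustrates this general idea; the pose-indexed template bank, the rejection threshold, and the function name are illustrative assumptions, not the actual implementation evaluated in the MSTAR program.

```python
import numpy as np

def mse_classify(chip, templates, reject_threshold=None):
    """Classify a SAR image chip by minimum mean-squared error.

    chip       : 2-D array, the test image chip
    templates  : dict mapping class label -> list of 2-D reference
                 templates (e.g., one per trained aspect angle)
    Returns (label, score); label is None if the best score exceeds
    the optional rejection threshold.
    """
    best_label, best_score = None, np.inf
    for label, views in templates.items():
        for tmpl in views:
            score = np.mean((chip - tmpl) ** 2)  # MSE between chip and template
            if score < best_score:
                best_label, best_score = label, score
    if reject_threshold is not None and best_score > reject_threshold:
        return None, best_score  # declare "unknown" (e.g., a confuser target)
    return best_label, best_score
```

Testing such a classifier at EOCs away from the training conditions (different depression angles, target variants, articulations, and so on) is what exposes the sensitivity reported above.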
Estimates of proportion- and rate-based performance measures may involve discrete distributions, small sample sizes, and extreme outcomes. Common methods for uncertainty characterization have limited accuracy in these circumstances. Accurate confidence interval estimators for proportions, rates, and their differences are described, and MATLAB programs are made available. The resulting confidence intervals are validated and compared to common methods. The programs search for confidence intervals using an integration of the Bayesian posterior with diffuse priors to measure the confidence level. The confidence interval estimators can find one- or two-sided intervals. For two-sided intervals, minimal-length, balanced-tail-probability, or balanced-width intervals can be selected.
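As a rough illustration of the posterior-integration idea, the Python sketch below computes a credible interval for a binomial proportion under a diffuse Jeffreys prior, with both balanced-tail and minimal-length variants. The Beta(0.5, 0.5) prior, the SciPy routines, and the function name are assumptions made for this sketch; they are not the MATLAB programs described in the paper.

```python
import numpy as np
from scipy import stats, optimize

def jeffreys_interval(successes, trials, conf=0.95, kind="balanced-tail"):
    """Credible interval for a binomial proportion from the Beta posterior
    induced by the Jeffreys prior Beta(0.5, 0.5).

    kind = "balanced-tail"  : equal posterior mass in each tail
    kind = "minimal-length" : shortest interval containing `conf` mass
    """
    alpha = 1.0 - conf
    post = stats.beta(successes + 0.5, trials - successes + 0.5)

    if kind == "balanced-tail":
        return post.ppf(alpha / 2), post.ppf(1 - alpha / 2)

    # Minimal-length: choose how much of alpha to place in the lower tail
    # so that the resulting interval is as short as possible.
    def width(a):
        return post.ppf(1 - (alpha - a)) - post.ppf(a)

    res = optimize.minimize_scalar(
        width, bounds=(1e-12, alpha - 1e-12), method="bounded"
    )
    a = res.x
    return post.ppf(a), post.ppf(1 - (alpha - a))

if __name__ == "__main__":
    # Extreme outcome with a small sample: 0 successes in 20 trials,
    # the regime where normal-approximation intervals are least reliable.
    print(jeffreys_interval(0, 20, kind="balanced-tail"))
    print(jeffreys_interval(0, 20, kind="minimal-length"))
```

The same recipe extends to rates (a Gamma posterior in place of the Beta) and, with numerical integration over the joint posterior, to differences of proportions or rates.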