Airborne tracking and identification (ID) of high-value ground targets is a difficult task affected by sensor, target, and environmental conditions. Layered sensing, which combines standoff and short-range sensors, exploits sensor diversity to maintain target track and ID in cluttered environments such as cities or densely vegetated areas. Fusion at the data, feature, decision, or information level is necessary to achieve high-confidence target classification from multiple sensors and sensing modalities. Target ID performance improves when the additional information provided by independent sensing modalities is exploited through information fusion for automatic target recognition (ATR).
Increased target ID performance has been demonstrated using spatial-temporal multi-look sensor fusion and decision-level fusion. To further enhance target ID performance and increase decision confidence, feature-level fusion techniques are being investigated. A fusion performance model for feature-level fusion was applied to a combination of sensor types and features to provide estimates of the fusion gain. This paper presents a fusion performance gain for Synthetic Aperture Radar (SAR), electro-optical (EO), and infrared (IR) video stationary target identification.

Keywords: decision level fusion, feature level fusion, electro-optical (EO), infrared (IR), synthetic aperture radar (SAR), information fusion, automatic target recognition (ATR), national imagery interpretability rating scale (NIIRS).
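As a minimal sketch of the decision-level fusion idea mentioned above, the classifier outputs from independent sensing modalities can be combined with a conditional-independence (product) rule. The class labels, posterior values, and function name below are illustrative assumptions, not quantities from the paper.

```python
def fuse_decisions(posteriors):
    """Fuse per-sensor class posteriors into one decision by multiplying
    across sensors and renormalizing. Assumes conditionally independent
    sensors and a uniform class prior (a common simplifying assumption)."""
    classes = posteriors[0].keys()
    fused = {c: 1.0 for c in classes}
    for p in posteriors:
        for c in classes:
            fused[c] *= p[c]
    total = sum(fused.values())
    return {c: v / total for c, v in fused.items()}

# Hypothetical single-sensor classifier outputs for three target classes.
sar = {"tank": 0.6, "truck": 0.3, "clutter": 0.1}  # SAR classifier
eo  = {"tank": 0.5, "truck": 0.4, "clutter": 0.1}  # EO classifier
ir  = {"tank": 0.7, "truck": 0.2, "clutter": 0.1}  # IR classifier

fused = fuse_decisions([sar, eo, ir])
```

Because the three modalities independently favor the same class, the fused posterior is more concentrated than any single sensor's output, which is the fusion gain the paper quantifies for SAR, EO, and IR.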