Recognition systems in the remote sensing domain often operate in "open-world" environments, where they must accurately classify data from the in-distribution categories while simultaneously detecting and rejecting anomalous/out-of-distribution (OOD) inputs. However, most modern designs perform this recognition function with Deep Neural Networks (DNNs) trained under "closed-world" assumptions in offline-only environments. As a result, these systems are by construction ill-equipped to handle anomalous inputs and have no mechanism for improving their OOD detection abilities during deployment. In this work, we address these weaknesses from two directions. First, we introduce advanced DNN training methods that co-design for accuracy and OOD detection in the offline training phase. We then propose a novel "learn-online" workflow for updating the DNNs during deployment using a small library of carefully collected samples from the operating environment. To show the efficacy of our methods, we experiment with two popular recognition tasks in remote sensing: scene classification in electro-optical satellite images and automatic target recognition in synthetic aperture radar imagery. In both, we find that our two primary design contributions individually improve detection performance and are complementary when combined. Additionally, we find that detection performance on difficult, highly granular OOD samples can be drastically improved using only tens or hundreds of samples collected from the environment. Finally, through analysis we determine that the logic for adding/removing samples from the collection library is of key importance, and that using a proper learning rate during the model update step is critical.
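
To make the "learn-online" workflow concrete, the following is a minimal sketch of the deployment-time loop, assuming a PyTorch classifier and operator-confirmed labels for collected samples. The names `SampleLibrary`, `maybe_add`, and `learn_online_update`, along with the specific add/evict policy and loss, are hypothetical illustrations of the idea described above, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

class SampleLibrary:
    """Hypothetical small, fixed-capacity buffer of samples collected
    during deployment. The add/remove policy here (score-gated insertion,
    oldest-first eviction) is one of many possible choices."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.samples = []  # list of (input_tensor, label_tensor) pairs

    def maybe_add(self, x, y, score, threshold=0.5):
        # Only keep samples whose collection score clears a threshold;
        # evict the oldest entry when the library is full.
        if score >= threshold:
            if len(self.samples) >= self.capacity:
                self.samples.pop(0)
            self.samples.append((x, y))

def learn_online_update(model, library, lr=1e-4, steps=1):
    """One 'learn-online' update pass over the library.

    A small learning rate is assumed to be critical here: large steps
    risk overwriting the decision boundaries learned offline.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        for x, y in library.samples:
            optimizer.zero_grad()
            logits = model(x.unsqueeze(0))          # add batch dimension
            loss = F.cross_entropy(logits, y.unsqueeze(0))
            loss.backward()
            optimizer.step()
    model.eval()
```

The sketch mirrors the two analysis findings stated above: the `maybe_add` policy governs what the library contains, and the conservatively small `lr` in `learn_online_update` keeps the deployment-time update from degrading the offline-trained model.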