In this study, we explore the biologically inspired Learn-On-The-Fly (LOTF) method, which actively learns and discovers patterns through improvisation and sensory intelligence, including pheromone trails, structure from motion, sensory fusion, sensory inhibition, and spontaneous alternation. LOTF is related to classic online modeling and adaptive modeling methods; however, it aims to solve more comprehensive, ill-structured problems, such as human activity recognition from drone video in a disaster environment. It helps to build explainable AI models that enable human-machine teaming through visual representation, visual reasoning, and machine vision. We anticipate that LOTF will have an impact on artificial intelligence, video analytics for searching and tracking survivors' activities for humanitarian assistance and disaster relief (HADR), field augmented reality, and field robotic swarms.
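To make the first of these mechanisms concrete, the sketch below illustrates the general pheromone-trail (stigmergy) idea: agents deposit markers on visited locations, trails evaporate over time, and later movement is biased toward stronger trails. This is a minimal illustration using the standard ant-colony update tau <- (1 - rho) * tau + deposit, not LOTF's specific formulation; the grid size, evaporation rate, and deposit amount are hypothetical parameters chosen only for demonstration.

```python
import random

# Illustrative stigmergy sketch (assumed parameters, not from the paper).
GRID = 10          # grid width/height (hypothetical)
RHO = 0.1          # evaporation rate (hypothetical)
DEPOSIT = 1.0      # pheromone left per visit (hypothetical)

pheromone = [[0.0] * GRID for _ in range(GRID)]

def step(x, y):
    """Move to a neighboring cell, preferring stronger pheromone trails."""
    neighbors = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < GRID and 0 <= y + dy < GRID]
    # Small additive constant keeps exploration alive on unmarked cells.
    weights = [pheromone[nx][ny] + 0.1 for nx, ny in neighbors]
    return random.choices(neighbors, weights=weights)[0]

def evaporate():
    """Decay all trails so stale information gradually fades."""
    for row in pheromone:
        for j in range(GRID):
            row[j] *= (1.0 - RHO)

# One agent wandering the grid: deposit, move, evaporate.
x, y = GRID // 2, GRID // 2
for _ in range(200):
    pheromone[x][y] += DEPOSIT   # mark the current cell
    x, y = step(x, y)
    evaporate()
```

In a search-and-track setting, the trail map plays the role of a shared, decaying memory: frequently revisited regions accumulate weight, while abandoned ones fade, which is the intuition behind using pheromone trails for on-the-fly pattern discovery.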