The widely adaptable capabilities of artificial intelligence, particularly deep learning and computer vision, have led to significant research output on fire and smoke detection. Previous studies often focus on themes such as early fire detection, increased operational awareness, and post-fire assessment. To further test the capabilities of deep learning detection in these scenarios, we collected and labeled a unique aerial image dataset to determine whether specific types of fire behavior could be reliably detected in prescribed fire settings. Our 960 labeled images were sourced from over 20.97 hours of UAS video collected during prescribed fire operations covering a large region of Texas and Louisiana, U.S. National Wildfire Coordinating Group (NWCG) fire behavior observations and descriptions served as a reference for determining fire behavior classes during labeling. YOLOv8 models were trained on NWCG Rank 1-3 fire behavior descriptions in grassland, shrubland, forested, and combined fire regimes within our study area. Models were first trained and validated on isolated image objects of fire behavior, and then on segmenting fire behavior within the original parent images. Models trained on isolated image objects of fire behavior consistently achieved a mAP of 0.808 or higher, with combined fire regimes producing the best results (mAP = 0.897). Most segmentation models performed relatively poorly, except for the forest regime model, which achieved box and mask mAP of 0.59 and 0.611, respectively. Our results indicate that classifying fire behavior with computer vision is possible in most fire regimes and fuel models, whereas segmenting fire behavior from background information is relatively difficult. However, it may be a manageable task given enough data and models developed for a specific fire regime.
With an increasing number of destructive wildfires and new challenges confronting fire managers, identifying how new technologies can quickly assess wildfire situations may improve wildfire responder awareness. We conclude that computer vision can support levels of abstraction deeper than mere detection of smoke or fire, and could enable even more detailed fire monitoring.