Unmanned combat aerial vehicles (i.e., drones) are reshaping surveillance, security, and conflict on the modern geopolitical stage. Various technologies can help track drones, each with different advantages and limitations concerning drone size and detection range. Machine learning (ML) can automatically detect and track drones in real time, surpassing human-level accuracy and providing enhanced situational awareness. Unfortunately, ML's power depends on the quality and quantity of its training data. For the drone detection task, existing datasets offer limited variation in environment, viewing angle, viewing distance, and drone type. We developed a customizable software tool called DyViR that generates large synthetic video datasets for training machine learning algorithms in aerial threat object detection. These datasets contain video and audio renderings of aerial objects within user-specified dynamic simulated biomes (i.e., arctic, desert, and forest). Users can alter the environment on a timeline, allowing changes to behaviors such as drone flight patterns and weather conditions across a synthetically generated dataset. DyViR supports additional controls such as motion blur, anti-aliasing, and fully dynamic moving cameras to produce imagery across multiple viewing angles. Each aerial object's classification (drone or airplane) and bounding box are automatically exported to a comma-separated-value (CSV) file alongside the rendered video to form a synthetic dataset. We demonstrate the value of DyViR by training a real-time YOLOv7-tiny model on these synthetic datasets. The object detection model's performance improved by 60.4% over a counterpart trained without DyViR data. This result suggests that synthetic datasets can surmount the scarcity of real-world training data for aerial threat object detection.
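
To make the export format concrete, the sketch below shows one way the per-frame CSV annotations described above could be converted into the normalized `class x_center y_center width height` label files a YOLOv7-tiny training pipeline expects. This is a minimal illustration under assumed conventions, not DyViR's documented schema: the column names (`frame`, `class_name`, `x_min`, `y_min`, `x_max`, `y_max`), the class-to-ID mapping, and the helper `csv_to_yolo_labels` are all hypothetical.

```python
# Hypothetical converter: DyViR-style CSV annotations -> YOLO-format labels.
# The column names and class list below are assumptions for illustration,
# not DyViR's documented output schema.
import csv
from collections import defaultdict
from pathlib import Path

CLASS_IDS = {"drone": 0, "airplane": 1}  # assumed from the two classes in the abstract

def csv_to_yolo_labels(csv_path, out_dir, frame_w, frame_h):
    """Write one YOLO label file per video frame.

    YOLO expects one line per object: `class x_center y_center width height`,
    with all coordinates normalized to [0, 1] by the frame dimensions.
    """
    per_frame = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            cls = CLASS_IDS[row["class_name"]]
            x_min, y_min = float(row["x_min"]), float(row["y_min"])
            x_max, y_max = float(row["x_max"]), float(row["y_max"])
            # Convert corner coordinates to normalized center/size.
            xc = (x_min + x_max) / 2 / frame_w
            yc = (y_min + y_max) / 2 / frame_h
            w = (x_max - x_min) / frame_w
            h = (y_max - y_min) / frame_h
            per_frame[int(row["frame"])].append(
                f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
            )

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for frame, lines in per_frame.items():
        # One label file per extracted frame image, e.g. frame_000042.txt
        (out / f"frame_{frame:06d}.txt").write_text("\n".join(lines) + "\n")

# Example usage, assuming 1080p rendered frames:
# csv_to_yolo_labels("dyvir_annotations.csv", "labels/", 1920, 1080)
```

With labels in this form, frames extracted from the rendered video and the generated `labels/` directory can be pointed at by a standard YOLOv7 dataset configuration.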