Drone aerial videos hold great promise for modern digital media and remote sensing applications, but effectively tracking multiple objects in such footage is difficult. Aerial footage typically contains complex scenes with many moving objects such as people, vehicles, and animals, and challenging situations such as large viewing-angle shifts and object crossings may occur simultaneously. We integrate random finite sets into a detection-based tracking framework that takes both object position and appearance into account. The tracker maintains the detection-box information of each detected object and builds the Box-MeMBer position prediction framework on top of MeMBer random-finite-set point-object tracking. We add a hierarchical connection structure to the OSNet network to construct MB-OSNet, which extracts object appearance information; feature maps at different levels are linked through the hierarchy so that the network captures rich semantic information at multiple scales. Similarity measurements for all detections and trajectories are computed and collected in a cost matrix that estimates the likelihood of every possible match, with each entry comparing a track and a detection in terms of position and appearance. The DB-Tracker algorithm performs strongly on multi-object tracking in drone aerial videos, achieving MOTA of 37.4% and 46.2% on the VisDrone and UAVDT datasets, respectively. By jointly considering object position and appearance information, DB-Tracker remains robust, particularly in complex scenes and under target occlusion, making it a powerful tool for challenging applications such as drone aerial video analysis.
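
As a rough illustration of the matching step described above, the sketch below builds a cost matrix that fuses a position term and an appearance term and then solves the assignment problem. It is a minimal sketch, not the authors' implementation: the choice of IoU for position similarity, cosine similarity for appearance embeddings, the fusion weight `alpha`, and the use of Hungarian assignment via `scipy.optimize.linear_sum_assignment` are assumptions for illustration only.

```python
# Minimal sketch (not the paper's code): fuse position (IoU) and appearance
# (cosine) similarity into one cost matrix, then solve the assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(box_a, box_b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def cosine_sim(f_a, f_b):
    """Cosine similarity between two appearance embeddings."""
    return float(np.dot(f_a, f_b) /
                 (np.linalg.norm(f_a) * np.linalg.norm(f_b) + 1e-9))

def build_cost_matrix(track_boxes, track_feats, det_boxes, det_feats, alpha=0.5):
    """Cost = 1 - weighted sum of position and appearance similarity.
    `alpha` is a hypothetical fusion weight, not taken from the paper."""
    cost = np.zeros((len(track_boxes), len(det_boxes)))
    for i, (tb, tf) in enumerate(zip(track_boxes, track_feats)):
        for j, (db, df) in enumerate(zip(det_boxes, det_feats)):
            sim = alpha * iou(tb, db) + (1.0 - alpha) * cosine_sim(tf, df)
            cost[i, j] = 1.0 - sim
    return cost

# Usage: Hungarian assignment over the fused cost matrix.
tracks = [np.array([10, 10, 50, 80]), np.array([100, 40, 140, 110])]
dets = [np.array([102, 42, 141, 112]), np.array([12, 11, 52, 83])]
t_feats = [np.random.rand(128) for _ in tracks]   # placeholder embeddings
d_feats = [np.random.rand(128) for _ in dets]     # (MB-OSNet outputs in the paper)
C = build_cost_matrix(tracks, t_feats, dets, d_feats)
row_ind, col_ind = linear_sum_assignment(C)       # matched (track, detection) pairs
```

In practice, the appearance embeddings would come from the MB-OSNet backbone and the predicted track boxes from the Box-MeMBer filter; the sketch only shows how the two similarity terms can be combined into a single cost matrix for data association.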