Automated, invariant object detection has proven to be a substantial challenge for the artificial intelligence research community. In computer vision, many benchmarks have been established using whole-image classification on datasets that are too small to eliminate statistical artifacts. As an alternative, we used a new dataset consisting of ~62 GB (on the order of 40,000 2-megapixel frames) of compressed high-definition aerial video, which we employed for both object classification and localization. Our algorithms mimic the processing pathways in primate visual cortex, exploiting color/texture, shape/form, and motion. We then combine the outputs of these pathways using a clustering technique to produce a final result in the form of labeled bounding boxes around objects of interest in the video. Localization adds complexity not generally found in whole-image classification problems. Our results are evaluated qualitatively and quantitatively using a scoring metric that assesses the overlap between our detections and ground truth.
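The abstract mentions a scoring metric based on the overlap between detections and ground-truth boxes. The paper's exact metric is not given here, but a common formulation of box overlap is intersection-over-union (IoU); the sketch below assumes axis-aligned boxes represented as (x_min, y_min, x_max, y_max) tuples, which is an illustrative convention rather than the authors' stated one.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a detection is often counted as correct when IoU exceeds a threshold (e.g. 0.5).
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```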