The ACT (Adaptive Color Tracker) algorithm for tracking objects with a moving video camera is presented. One distinctive feature of the algorithm is the adaptation of the tracked object's feature set to the background of the current frame. At each step, the algorithm selects from the object's features those that are most specific to the object and, at the same time, least specific to the background of the current frame, since the remaining object features not only fail to contribute to separating the tracked object from the background but actually impede its correct detection. The features of the object and the background are formed from color representations of the scene, which can be computed in two ways. The first way uses the 3D color vectors obtained by clustering the images of the object and the background with a fast version of the well-known k-means algorithm. The second, simpler and faster way partitions the RGB color space into 3D parallelepipeds and then replaces the color of each pixel with the average of all colors belonging to the same parallelepiped as that pixel's color. Another distinctive property of the algorithm is its simplicity, which allows it to run on small mobile computers such as the Jetson TX1 or TX2. The algorithm was tested on video sequences captured by various camcorders, as well as on the well-known TV77 data set, which contains 77 different tagged video sequences. The tests demonstrated the efficiency of the algorithm: on the test images, its accuracy and speed exceed those of the trackers implemented in the computer vision library OpenCV 4.1.
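The second color-representation method described above can be sketched in a few lines. The function name, the box step size, and the toy image below are illustrative assumptions; this is a minimal sketch of RGB-box quantization, not the paper's implementation.

```python
# Sketch of the abstract's second method: partition RGB space into fixed-size
# boxes (side `step` per channel), then replace each pixel's color with the
# mean color of all pixels falling in the same box. Step size is an assumption.
from collections import defaultdict

def quantize(pixels, step=32):
    """pixels: list of (r, g, b) tuples; returns list of box-averaged colors."""
    boxes = defaultdict(list)                     # box index -> member colors
    for p in pixels:
        boxes[tuple(c // step for c in p)].append(p)
    # mean color per box (integer division keeps valid 0-255 channel values)
    mean = {k: tuple(sum(ch) // len(v) for ch in zip(*v))
            for k, v in boxes.items()}
    return [mean[tuple(c // step for c in p)] for p in pixels]

img = [(10, 20, 30), (12, 22, 28), (200, 100, 50)]
print(quantize(img))  # → [(11, 21, 29), (11, 21, 29), (200, 100, 50)]
```

The two dark pixels fall into the same box and so receive a shared average color, while the bright pixel keeps its own; this single pass over the image is what makes the method faster than iterative k-means.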
An autonomous visual navigation algorithm is considered, designed for the "home" return of an unmanned aerial vehicle (UAV) equipped with an on-board video camera and an on-board computer in the absence of GPS and GLONASS navigation signals. The proposed algorithm is similar to well-known visual navigation approaches such as V-SLAM (simultaneous localization and mapping) and visual odometry, but differs in its separate implementation of the mapping and localization processes. It calculates the geographical coordinates of features in the frames taken by the on-board video camera during the flight, from the start point until the moment the GPS and GLONASS signals are lost. After the loss of signal, the return mission is launched, which estimates the position of the UAV relative to the map created from the previously found features. The proposed approach does not require calculations as complex as those of V-SLAM, and it does not accumulate errors over time, in contrast to visual odometry and traditional methods of inertial navigation. The algorithm was implemented and tested using a DJI Phantom 3 Pro quadcopter.
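The localization step — estimating the UAV's position relative to a map of previously geo-referenced features — can be illustrated with a toy translation-only model. Everything below (the function, pairing by index, the coordinates) is a hypothetical sketch; the abstract does not specify the actual estimation procedure.

```python
# Toy localization sketch: features with known map coordinates are matched to
# their observed ground-plane positions in the current frame; the UAV's offset
# is estimated as the mean displacement. Translation-only assumption.

def estimate_offset(map_pts, observed_pts):
    """Each argument: list of (x, y) in metres, paired by index.
    Returns the estimated (dx, dy) offset of the frame relative to the map."""
    n = len(map_pts)
    dx = sum(m[0] - o[0] for m, o in zip(map_pts, observed_pts)) / n
    dy = sum(m[1] - o[1] for m, o in zip(map_pts, observed_pts)) / n
    return dx, dy

map_pts = [(100.0, 50.0), (120.0, 60.0)]       # geo-referenced during flyout
observed = [(10.0, 5.0), (30.0, 15.0)]         # seen on the return leg
print(estimate_offset(map_pts, observed))      # → (90.0, 45.0)
```

Because each estimate is made against the fixed map rather than against the previous frame, errors do not compound from frame to frame — the property the abstract contrasts with visual odometry.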
A fast multilevel algorithm for clustering color images (MACC: Multilevel Algorithm for Color Clustering) is presented. Several well-known image-clustering algorithms are currently in active use, including the k-means algorithm (one of the most commonly used in data mining) and its fuzzy versions, watershed and region-growing algorithms, as well as a number of newer, more complex neural network and other approaches. However, they cannot be applied to clustering large color images in real time. Fast clustering is required, for example, when processing frames of video streams shot by various video cameras or when working with large image databases. The developed MACC algorithm clusters large images — for example, of Full HD size — on a personal computer in under 20 milliseconds with an average deviation from the original color values of about five units, whereas a parallel version of the classical k-means algorithm clusters the same images with an average error of more than 12 units in a time exceeding 2 seconds. The proposed algorithm for multilevel color clustering of images is quite simple to implement and has been extensively tested on a large number of color images.
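For reference, the k-means baseline that MACC is compared against can be sketched minimally over RGB colors. The fixed initial centers and the tiny color set are illustrative assumptions; this is a plain sequential sketch, not the paper's parallel implementation or MACC itself.

```python
# Minimal k-means over RGB colors: alternate assignment of each color to its
# nearest center (squared Euclidean distance) and recomputation of centers as
# group means. Initial centers are chosen by hand for the example.

def kmeans(colors, centers, iters=10):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for c in colors:
            i = min(range(len(centers)),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(c, centers[j])))
            groups[i].append(c)
        # empty groups keep their old center
        centers = [tuple(sum(ch) / len(g) for ch in zip(*g)) if g else ctr
                   for g, ctr in zip(groups, centers)]
    return centers

colors = [(250, 10, 10), (240, 20, 5), (10, 10, 240), (5, 25, 250)]
print(kmeans(colors, centers=[(255, 0, 0), (0, 0, 255)]))
# → [(245.0, 15.0, 7.5), (7.5, 17.5, 245.0)]
```

The iterative assign/update loop over every pixel is exactly what makes k-means too slow for real-time Full HD frames, which motivates the single-pass multilevel scheme the abstract describes.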