Testing and evaluation of an automotive perception system is a complex task that requires special equipment and infrastructure. To compute key performance indicators and compare the results with the real-world situation, additional sensors and manual data labelling are often required. In this article, we propose a different approach based on a UAV equipped with a 4K camera flying above a test track. Two computer vision methods are used to precisely determine the positions of the objects around the car – one based on ArUco markers and the other on a DCNN (the algorithms used are provided on GitHub). The detections are then correlated with the perception system readings. For both the static and dynamic experiments, the differences between the systems are mostly below 0.5 m. The results of the performed experiments indicate that this approach could be an interesting alternative to existing evaluation solutions.
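For readers who want a concrete picture of the marker-based branch, a minimal Python/OpenCV sketch of ArUco detection in a single aerial frame is given below. The dictionary (DICT_4X4_50), the helper name detect_marker_centres, and the reliance on the OpenCV >= 4.7 ArucoDetector API are illustrative assumptions; this is not the code published on GitHub for the paper.

```python
import cv2
import numpy as np

# Illustrative sketch: detect ArUco markers in a UAV frame and return their
# image-plane centres. Requires OpenCV >= 4.7 with the aruco module; the
# dictionary choice below is an assumption, not the paper's configuration.
def detect_marker_centres(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(gray)
    centres = {}
    if ids is not None:
        for marker_id, c in zip(ids.flatten(), corners):
            centres[int(marker_id)] = c[0].mean(axis=0)  # (x, y) in pixels
    return centres
```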
In this paper, we present a vision-based hardware-software control system enabling autonomous landing of a multirotor unmanned aerial vehicle (UAV). It allows the detection of a marked landing pad in real time for a 1280 × 720 @ 60 fps video stream. In addition, a LiDAR sensor is used to measure the altitude above ground. A heterogeneous Zynq SoC device is used as the computing platform. The solution was tested on a number of sequences and the landing pad was detected with 96% accuracy. This research shows that a reprogrammable heterogeneous computing system is a good solution for UAVs because it enables real-time data stream processing with relatively low energy consumption.
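As a loose software analogue of the landing-pad detection step (the actual system is a hardware pipeline on the Zynq SoC), the sketch below keeps the largest high-contrast blob in a frame as the pad candidate. The Otsu thresholding, the helper name find_pad_candidate, and the minimum-area parameter are assumptions for illustration only.

```python
import cv2

# Hedged software analogue of a landing-pad detector: threshold the frame and
# keep the largest high-contrast blob as the pad candidate. The real system
# is an FPGA pipeline; names and thresholds here are illustrative only.
def find_pad_candidate(frame_bgr, min_area=500):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not candidates:
        return None
    x, y, w, h = cv2.boundingRect(max(candidates, key=cv2.contourArea))
    return (x + w / 2.0, y + h / 2.0)  # pad centre in pixel coordinates
```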
The information about optical flow, i.e., the movement of pixels between two consecutive images from a video sequence, is used in many vision systems, both classical and those based on deep neural networks. In some robotic applications, e.g., in autonomous vehicles, it is necessary to calculate the flow in real time. This represents a challenging task, especially for high-resolution video streams. In this work, two gradient-based algorithms—Lucas–Kanade and Horn–Schunck—were implemented on a ZCU 104 platform with Xilinx Zynq UltraScale+ MPSoC FPGA. A vector data format was used to enable flow calculation for a 4K (Ultra HD, 3840 × 2160 pixels) video stream at 60 fps. In order to detect larger pixel displacements, a multi-scale approach was used in both algorithms. Depending on the scale, the calculations were performed for different data formats, allowing for more efficient processing by reducing resource utilisation. The presented solution allows real-time optical flow determination in multiple scales for a 4K resolution with estimated energy consumption below 6 W. The algorithms realised in this work can be a component of a larger vision system in advanced surveillance systems or autonomous vehicles.
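For reference, a minimal single-scale, software-only sketch of the gradient-based Lucas–Kanade step is given below in Python/NumPy, with OpenCV used for the image gradients. The window size, the box-filter aggregation, and the function name are assumptions for illustration; the multi-scale, vectorised FPGA data path described above is not reproduced here.

```python
import cv2
import numpy as np

# Minimal single-scale dense Lucas-Kanade flow (an illustrative sketch of the
# gradient-based formulation, not the FPGA implementation described above).
def lucas_kanade_dense(prev_gray, next_gray, win=15, eps=1e-6):
    prev = prev_gray.astype(np.float32)
    nxt = next_gray.astype(np.float32)
    # Spatial and temporal image gradients.
    Ix = cv2.Sobel(prev, cv2.CV_32F, 1, 0, ksize=3)
    Iy = cv2.Sobel(prev, cv2.CV_32F, 0, 1, ksize=3)
    It = nxt - prev
    # Windowed sums of the structure-tensor terms via an unnormalised box filter.
    box = lambda img: cv2.boxFilter(img, cv2.CV_32F, (win, win), normalize=False)
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    Sxt, Syt = box(Ix * It), box(Iy * It)
    det = Sxx * Syy - Sxy * Sxy
    det = np.where(np.abs(det) < eps, eps, det)
    # Closed-form solution of the 2x2 least-squares system at every pixel.
    u = (Sxy * Syt - Syy * Sxt) / det
    v = (Sxy * Sxt - Sxx * Syt) / det
    return u, v
```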
One of the problems encountered in the field of computer vision and video data analysis is the extraction of information from low-contrast images. This problem can be addressed in several ways, including the use of histogram equalisation algorithms. In this work, a method designed for this purpose—the Contrast-Limited Adaptive Histogram Equalization (CLAHE) algorithm—is implemented in hardware. An FPGA platform is used for this purpose due to the ability to run parallel computations and very low power consumption. To enable the processing of a 4K resolution (UHD, 3840 × 2160 pixels) video stream at 60 fps (frames per second) by using the CLAHE method, it is necessary to use a vector data format and process multiple pixels simultaneously. The algorithm realised in this work can be a component of a larger vision system, such as in autonomous vehicles or drones, but it can also support the analysis of underwater, thermal, or medical images both by humans and in an automated system.
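As a software reference point, CLAHE can be applied to a single frame with OpenCV as sketched below. The clip limit, the tile grid, and the choice to equalise only the L channel of a Lab-converted image are illustrative assumptions rather than parameters of the hardware implementation.

```python
import cv2

# Hedged usage sketch of CLAHE on a single colour frame via OpenCV (a software
# reference point, not the FPGA pipeline described above). The clip limit and
# tile grid below are illustrative defaults, not values from the paper.
def enhance_contrast(frame_bgr, clip_limit=2.0, tiles=(8, 8)):
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l_chan, a_chan, b_chan = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    lab = cv2.merge((clahe.apply(l_chan), a_chan, b_chan))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```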