In this paper, we propose a method to easily calibrate multiple Kinect V2 sensors. It requires the cameras to simultaneously observe a 1D object shown at different orientations (at least three) or a 2D object in at least one acquisition. This is possible due to the built-in coordinate mapping capabilities of the Kinect. Our method follows five steps: image acquisition, pre-calibration, point cloud matching, intrinsic parameter initialization, and final calibration. We modeled the radial and tangential distortion parameters of all the cameras, obtaining a root mean square reprojection error of 0.2 pixels on the depth cameras and 0.4 pixels on the color cameras. To validate the calibration results, we performed colored point cloud fusion and 3D reconstruction using the depth and color information from four Kinect sensors.

Key words: Camera calibration, Kinect V2 system, Multiple cameras
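The reported reprojection errors follow from comparing observed image points with reprojections under a pinhole model with distortion. The sketch below is illustrative only, not the authors' implementation; it assumes the common radial (k1, k2) and tangential (p1, p2) distortion model and 3D points already expressed in the camera frame.

```python
import numpy as np

def project_points(X_cam, fx, fy, cx, cy, k1, k2, p1, p2):
    """Project 3D points (camera frame) with the standard pinhole model
    plus radial (k1, k2) and tangential (p1, p2) distortion."""
    x = X_cam[:, 0] / X_cam[:, 2]
    y = X_cam[:, 1] / X_cam[:, 2]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    u = fx * x_d + cx
    v = fy * y_d + cy
    return np.column_stack([u, v])

def rms_reprojection_error(X_cam, uv_observed, params):
    """Root mean square distance (in pixels) between observed image points
    and the reprojection of their 3D counterparts."""
    uv_pred = project_points(X_cam, *params)
    residuals = uv_pred - uv_observed
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
```

In a full pipeline, the intrinsic and distortion parameters would come from the initialization and final calibration steps, and the RMS value would be accumulated over all observations of the 1D or 2D target.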
This paper presents an algorithm with low computational complexity for classifying and recognizing characters, based on random sampling and high-dimensional binary spaces, aimed at the development of real-time applications. Character classification uses uniform random sampling as the feature selection process and subsequently encodes the samples as binary strings. Associative memories are commonly used as general classifiers with linear criteria to discriminate between data points. In most classifiers, the ability to efficiently detect class membership depends entirely on the expressiveness of the attributes used to encode the data. Each binary pattern encodes the distinct characteristics of several glyphs. Character features are represented as elements of a high-dimensional binary space, where a cluster criterion is defined under the L1 metric. The reduction in computational complexity is analyzed. Reducing the number of character features through random sampling makes it feasible to manage all the character information in physical architectures; this approach can therefore use resources on a hardware platform with integer operators typically implemented at the hardware-register level. Finally, the approach is implemented on a parallel Field-Programmable Gate Array (FPGA) architecture and tested on a database (DB) of different fonts, including distortions, showing efficiency comparable to other well-known approaches.

KEYWORDS: associative memory, Lernmatrix, OCR, real time, Steinbuch

Recently, character recognition systems have become an essential field of research because of their diverse applications, including handwritten character recognition,[1,2] license plate recognition,[3] and address and zip code recognition.[4] A typical character recognition system consists of the following components: image acquisition, segmentation, feature extraction, and classification. Various methods have been proposed for recognizing characters.
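To make the feature-selection and classification steps concrete, the following sketch shows uniform random sampling of pixel positions, binary encoding of a glyph, and a nearest-class decision under the L1 metric (equivalent to Hamming distance for binary codes). It is a simplified software analogue, not the Lernmatrix/FPGA implementation described in the paper; names such as make_sampler, encode, and classify are illustrative.

```python
import numpy as np

def make_sampler(image_shape, n_samples, seed=0):
    """Pick a fixed set of uniformly random pixel positions; the same
    positions are reused for every glyph so that codes are comparable."""
    rng = np.random.default_rng(seed)
    n_pixels = image_shape[0] * image_shape[1]
    return rng.choice(n_pixels, size=n_samples, replace=False)

def encode(glyph, sample_idx, threshold=128):
    """Encode a grayscale glyph as a binary string: 1 where the sampled
    pixel is 'ink' (below the threshold), 0 otherwise."""
    flat = glyph.reshape(-1)
    return (flat[sample_idx] < threshold).astype(np.uint8)

def classify(code, class_codes):
    """Assign the class whose stored binary pattern is nearest under the
    L1 metric (Hamming distance for binary vectors)."""
    dists = {label: int(np.abs(code.astype(int) - ref.astype(int)).sum())
             for label, ref in class_codes.items()}
    return min(dists, key=dists.get)

# Example usage with hypothetical 32x32 glyph images:
# sample_idx = make_sampler((32, 32), n_samples=256)
# class_codes = {label: encode(proto, sample_idx) for label, proto in prototypes.items()}
# predicted = classify(encode(test_glyph, sample_idx), class_codes)
```

Because the codes are short integer bit vectors and the decision rule needs only absolute differences and sums, this kind of classifier maps naturally onto integer operators at the hardware-register level, which is the motivation for the FPGA implementation.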