This paper presents a real-time method to detect and track multiple mobile ground robots using event cameras. The method uses density-based spatial clustering of applications with noise (DBSCAN) to detect the robots and a single k-dimensional (k-d) tree to keep track of them accurately as they move in an indoor arena. Robust detections and tracks are maintained despite event camera noise and a lack of events (due to robots moving slowly or stopping). An off-the-shelf RGB camera-based tracking system was used to provide ground truth. Experiments involving up to 4 robots were performed to study the effects on detection and tracking performance of (i) varying the DBSCAN parameters, (ii) the event accumulation time, (iii) the number of robots in the arena, (iv) the speed of the robots, (v) variation in ambient light conditions, and (vi) the choice of clustering algorithm. The experimental results showed 100% detection and tracking fidelity despite event camera noise and robots stopping for tests involving up to 3 robots (and upwards of 93% for 4 robots). When the lighting conditions were varied, a graceful degradation in detection and tracking fidelity was observed.
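The detect-then-track pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes scikit-learn's DBSCAN and SciPy's cKDTree, and the parameter values (`eps`, `min_samples`, `max_dist`) are placeholders, not the values tuned in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import cKDTree

def detect_robots(events, eps=5.0, min_samples=10):
    """Cluster accumulated event (x, y) coordinates with DBSCAN and
    return one centroid per cluster (the noise label -1 is discarded)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(events)
    return np.array([events[labels == k].mean(axis=0)
                     for k in sorted(set(labels)) if k != -1])

def associate(tracks, detections, max_dist=20.0):
    """Match each existing track to its nearest detection via a k-d tree
    built over the detections. Tracks with no detection within max_dist
    keep their last position, so a robot that stops (and emits no events)
    does not lose its track."""
    if len(detections) == 0:
        return tracks
    tree = cKDTree(detections)
    dists, idx = tree.query(tracks)
    return np.array([detections[j] if d <= max_dist else t
                     for t, d, j in zip(tracks, dists, idx)])
```

Holding a track at its last position when no nearby cluster is found is one simple way to realize the robustness to missing events that the abstract reports; the paper's actual track-maintenance logic may differ.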
The exponentially growing Internet of Things (IoT) market presents compelling opportunities for innovators to develop and create meaningful applications affecting everyday life. This shifting IoT paradigm is placing new demands on educational institutions to train students with IoT-relevant skills. Numerous education researchers continue to document the implementation of IoT-based curriculum, pedagogy, and assessment techniques for STEM education. In this paper, we systematically reviewed this literature and identified 60 journal articles and conference papers that have reported implementing IoT curriculum and associated instructional approaches, educational technologies, and assessment strategies for K-12 and university students. The curriculum, pedagogy, and assessment strategies presented in these studies were analyzed in the context of the sensing, networking, services, and interface layers of the IoT technology paradigm. The review identifies best educational practices and synthesizes actionable strategies for educators to implement effective IoT learning experiences for students. These strategies leverage low-cost IoT hardware, open-source IoT software, active learning-based instructional approaches, and direct/indirect assessment methods. As IoT education becomes increasingly pervasive, the review and strategies provided in this paper may serve as a guide for future educational efforts.
Motion-capture systems are crucial in developing multi-quadrotor systems due to their ability to provide fast and accurate ground truth measurements for tracking and control. This paper presents the implementation details and experimental validation of a relatively low-cost motion-capture system for multi-quadrotor motion planning using an event camera. The real-time, multi-quadrotor detection and tracking tasks are performed using the You-Only-Look-Once (YOLOv5) deep learning network and a k-dimensional (k-d) tree, respectively. An optimization-based decentralized motion planning algorithm is implemented to demonstrate the effectiveness of this motion-capture system. Extensive experimental evaluations were performed to (1) compare the performance of four deep-learning algorithms for high-speed multi-quadrotor detection on event-based data, (2) study precision, recall, and F1 scores as functions of lighting conditions and camera motion, and (3) investigate the scalability of this system as a function of the number of quadrotors flying in the arena. Comparative analysis of the deep learning algorithms on a consumer-grade GPU demonstrates a 4.8× to 12× sampling/inference rate advantage that YOLOv5 provides over representative one- and two-stage detectors and a 1.14× advantage over YOLOv4. In terms of precision and recall, YOLOv5 performed 15% to 18% and 27% to 41% better, respectively, than representative state-of-the-art deep learning networks. Graceful detection and tracking performance degradation was observed in the face of progressively darker ambient light conditions. Despite severe camera motion, YOLOv5 precision and recall values of 94% and 98% were achieved, respectively. Finally, experiments involving up to six indoor quadrotors demonstrated the scalability of this approach. This paper also presents the first open-source event camera dataset in the literature, featuring over 10,000 fully annotated images of multiple quadrotors operating in indoor and outdoor environments.
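The precision, recall, and F1 scores reported above follow the standard definitions over true-positive, false-positive, and false-negative detection counts. A small helper makes the relationship explicit (the counts in the usage note are illustrative, not figures from the paper):

```python
def detection_metrics(tp, fp, fn):
    """Compute precision, recall, and F1 from true-positive (tp),
    false-positive (fp), and false-negative (fn) detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

For example, 90 correct detections alongside 10 spurious boxes and 10 missed quadrotors would yield precision = recall = F1 = 0.90.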