In this paper we present a system based on the Microsoft Kinect™ sensor aimed at the automatic detection of risk postures during human work activities. We introduce a pick-and-place task in which three laterally standing subjects move light cardboard boxes from the various levels of a bookcase to its top and then put them back in their original places. They repeat the task over several work cycles while we continuously capture their natural movements with Kinect, storing the joint positions and the color images. From the joint positions, our system detects specific risk postures following the definitions of the Rapid Upper Limb Assessment (RULA) method. We compare the postures detected by our system against a baseline of detections made by a committee of five experts who used the captured color images. We find that it was hard for the experts to distinguish among some RULA postures during a work cycle because of the narrow detection margin and the difficulty of perceiving whether a limb had reached a certain position, which is particularly true for the wrist and neck. This leads to a higher false-positive rate, with our system detecting postures that the experts do not, and to lower overall accuracy. We therefore relax the detection margin applied to the Kinect outputs, which raises the accuracy with respect to the expert committee to 0.93 with only ±1° of relaxation, a margin that is negligible for human perception. Our results show the suitability of Kinect for lateral risk-posture detection in pick-and-place activities.
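To make the detection idea concrete, the following is a minimal sketch of how a RULA-style posture check with a relaxed angle margin could be computed from Kinect joint positions. The joint names, the 20° example threshold, and both helper functions are illustrative assumptions rather than the authors' implementation; only the ±1° relaxation figure comes from the abstract.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def is_risk_posture(angle_deg, threshold_deg, relaxation_deg=1.0):
    """Flag a posture only when the angle exceeds the RULA threshold by more
    than the relaxation margin, reducing false positives near the boundary."""
    return angle_deg > threshold_deg + relaxation_deg

# Hypothetical example: shoulder flexion from hip, shoulder, and elbow positions
hip, shoulder, elbow = (0.0, 0.0, 0.0), (0.0, 0.5, 0.0), (0.3, 0.8, 0.0)
flexion = 180.0 - joint_angle(hip, shoulder, elbow)  # 0 deg = arm hanging at the side
print(is_risk_posture(flexion, threshold_deg=20.0))  # assumed 20 deg RULA band boundary
```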
Automatic recognition of traffic signs in complex, real-world environments has become a pressing research concern with the rapid improvement of smart technologies. This study therefore leveraged an industry-grade object detection and classification algorithm (You Only Look Once, YOLO) to develop an automatic traffic sign recognition system that can identify widely used regulatory and warning signs in diverse driving conditions. Sign recognition performance was assessed with respect to weather and reflectivity to identify the limitations of the developed system in real-world conditions. Furthermore, we produced several editions of our sign recognition system by gradually increasing the number of training images, in order to account for the significance of training resources in recognition performance. Analysis across weather conditions, including fair (clear and sunny) and inclement (cloudy and snowy), showed that the highly trained system was less susceptible to weather-related recognition errors. Analysis across reflectivity conditions, including sheeting type, lighting conditions, and sign age, showed that older engineering-grade sheeting signs were more likely to go unnoticed by the developed system at night. In summary, this study incorporated automatic object detection technology to develop a novel sign recognition system and to determine its real-world applicability, opportunities, and limitations for future integration with advanced driver assistance technologies.
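As a rough illustration of the recognition step, the sketch below runs a YOLO detector over a single traffic-scene image. The abstract does not state which YOLO release or framework was used, so the `ultralytics` package, the weights file name, and the confidence threshold here are all assumptions, not the paper's pipeline.

```python
from ultralytics import YOLO

# Hypothetical weights fine-tuned on regulatory and warning sign images
model = YOLO("traffic_signs.pt")

# Single-image inference; conf filters out low-confidence detections
results = model("intersection.jpg", conf=0.25)

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]          # predicted sign class
        print(f"{cls_name}: {float(box.conf):.2f} at {box.xyxy.tolist()}")
```

In the study, editions of the system trained on progressively larger image sets would correspond to retraining such a detector with more labeled sign images before evaluating it under the different weather and reflectivity conditions.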