In a human-robot collaborative production system, the robot may request interaction or notify the human operator when an uncertainty arises. Conventional industrial tower lights were designed for generic machine signalling and may not be the best solution for robot signalling in a collaborative setting. In such a system, human operators may monitor multiple robots while carrying out a manual task, so it is important to minimise the diversion of their attention. This paper presents a novel robot signalling solution, the Robot Light Skin (RLS), an integrated signalling system that can be used on most articulated robots. An experiment was conducted to validate this concept in terms of its effect on operators' reaction time, hit-rate, awareness and task performance. The results showed that participants reacted faster to the RLS and achieved a higher hit-rate. Eye-tracking data collected during the experiment showed a reduction in attention diverted away from the manual task when the RLS was used. Future studies should explore the effect of the RLS concept on large-scale systems and multi-robot systems.
It is going to be increasingly important for manufacturing system designers to incorporate human activity data and ergonomic analysis with other performance data in digital design modelling and system monitoring. However, traditional methods of capturing human activity data are not sufficiently accurate to meet the needs of digitised data analysis: qualitative data are subject to bias and imprecision, and optically derived data are hindered by occlusions caused by structures or other people in a working environment. Therefore, to meet contemporary needs for more accurate and objective data, inertial non-optical methods of measurement appear to offer a solution. This paper describes a case study conducted within the aerospace manufacturing industry, where data on the human activities involved in aircraft wing systems installations were first collected via traditional ethnographic methods and found to have limited accuracy and
This paper presents a novel method for dynamic alignment control using infrared-light depth imagery to enable an automated wheel loading operation for the trim and final automotive assembly line. A key requirement for automated wheel loading is to track the motion of the wheel hub and simultaneously identify the spatial positions and angular orientations of its alignment features in real time on a moving vehicle body. This requirement is met in this work, where low-cost infrared depth-imaging devices such as the Microsoft Kinect™ and Asus Xtion™, widely used in the gaming industry, are employed to track a moving wheel hub and recognise alignment features on both the wheel hub and the wheel in real time in a laboratory environment. Accurate control instructions are then computed to instruct the automation system to rotate the wheel into precise alignment with the wheel hub and load the wheel at the right time. Experimental results demonstrate that the reproducibility error in alignment control satisfies the assembly tolerance of 2 mm for the wheel loading operation, and thus the proposed method can be applied to automate wheel assembly on the trim and final automotive assembly line. The novelty of this work lies in its use of depth imaging for dynamic alignment control, which provides real-time spatial data in all three axes simultaneously, as opposed to the commonly reported RGB imaging techniques that are computationally more demanding, sensitive to ambient lighting, and require additional force sensors to obtain depth-axis control data. This paper demonstrates the concept of a light-controlled factory, in which depth imaging using infrared light, combined with depth image analysis, enables intelligent control in automation.
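To illustrate the kind of alignment computation this abstract describes (this is a minimal sketch, not the authors' actual implementation; the function and variable names are assumptions), the rotation needed to bring the wheel's bolt holes into line with the hub's studs can be estimated from matched alignment features extracted from the depth image:

```python
import math

def angular_offset(hub_features, wheel_features):
    """Estimate the rotation (radians, about the hub axis) needed to align
    the wheel's bolt holes with the hub's studs.

    Each argument is a list of matched (y, z) feature coordinates, assumed
    to have been extracted from a depth image and projected onto the plane
    of the hub face (the depth axis x is normal to that plane).
    """
    def centred_angles(points):
        # Centre each feature set on its centroid so only rotation remains.
        cy = sum(p[0] for p in points) / len(points)
        cz = sum(p[1] for p in points) / len(points)
        return [math.atan2(p[1] - cz, p[0] - cy) for p in points]

    hub_angles = centred_angles(hub_features)
    wheel_angles = centred_angles(wheel_features)

    # Average the per-feature angular differences, wrapped to (-pi, pi].
    diffs = []
    for h, w in zip(hub_angles, wheel_angles):
        d = h - w
        diffs.append(math.atan2(math.sin(d), math.cos(d)))
    return sum(diffs) / len(diffs)
```

In a full system this angle would be recomputed continuously as the depth sensor tracks the moving hub, and the result fed to the automation controller as the wheel rotation command.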
Several manufacturing operations remain manual even in today's highly automated industry because their complexity makes them heavily reliant on human skill, intellect and experience. This work aims to aid the automation of one such operation: the wheel loading operation on the trim and final moving assembly line in automotive production. It proposes a new method that uses multiple low-cost depth-imaging sensors, commonly used in gaming, to acquire and digitise key shopfloor data associated with the operation, such as the motion characteristics of the vehicle body on the moving conveyor line and the angular positions of alignment features on the parts to be assembled, in order to inform an intelligent automation solution. Experiments are conducted to test the performance of the proposed method across various assembly conditions, and the results are validated against an industry-standard method using laser tracking. Some disadvantages of the method are discussed, and improvements are proposed. The proposed method has the potential to enable the automation of a wide range of moving assembly operations across multiple sectors of the manufacturing industry.