In this paper, we propose an exteroceptive sensing based framework to achieve safe human-robot interaction during shared tasks. Our approach allows a human to operate in close proximity to the robot, while pausing the robot's motion whenever a collision between the human and the robot is imminent. The human's presence is sensed by an N-range-sensor system, which consists of multiple range sensors mounted at various points on the periphery of the work cell. Each range sensor is based on a Microsoft Kinect sensor; each sensor observes the human and outputs a 20 DOF human model. Positional data from these models are fused to generate a refined human model. Next, the robot and the human model are approximated by dynamic bounding spheres, and the robot's motion is controlled by tracking collisions between these spheres. Whereas most previous exteroceptive methods relied on depth data from camera images, our approach is one of the first successful attempts to build an explicit human model online and use it to evaluate human-robot interference. Real-time behavior observed during experiments, in which a human and a 5 DOF robot safely perform shared assembly tasks, validates our approach.
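The interference check described above can be illustrated with a simple pairwise sphere test. The following is a minimal sketch, assuming the robot links and the fused human model have already been wrapped in bounding spheres; the sphere data, safety margin, and printed actions are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a bounding-sphere interference check between robot and human.
# Sphere centers/radii, the 0.05 m margin, and the pause/continue actions are
# illustrative assumptions, not the published system's parameters.
import numpy as np

SAFETY_MARGIN = 0.05  # assumed clearance in meters


def spheres_collide(c1, r1, c2, r2, margin=SAFETY_MARGIN):
    """Return True if two spheres are closer than the allowed clearance."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) <= r1 + r2 + margin


def interference_detected(robot_spheres, human_spheres):
    """Check every robot sphere against every human-model sphere."""
    return any(
        spheres_collide(rc, rr, hc, hr)
        for rc, rr in robot_spheres
        for hc, hr in human_spheres
    )


# Example: one robot link sphere and one human forearm sphere.
robot_spheres = [((0.4, 0.0, 0.6), 0.10)]
human_spheres = [((0.5, 0.05, 0.55), 0.08)]
if interference_detected(robot_spheres, human_spheres):
    print("Imminent collision: pause robot motion")
else:
    print("Clear: continue motion")
```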
This paper presents the design of an instruction generation system that can be used to automatically generate instructions for complex assembly operations performed by humans on factory shop floors. Multimodal information (text, graphical annotations, and 3D animations) is used to create easy-to-follow instructions, which reduces learning time and eliminates the possibility of assembly errors. An automated motion planning subsystem computes a collision-free path for each part from its initial posture in a crowded scene to its final posture in the current subassembly. Visualizing this computed motion yields the 3D animations. The system also includes an automated part identification module that enables the human to identify and pick the correct part from a set of similar-looking parts. The system's ability to automatically translate assembly plans into instructions enables a significant reduction in the time taken to generate instructions and update them in response to design changes.
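As a rough illustration of how one assembly-plan step might be translated into a multimodal instruction, the sketch below assumes a simple step record (part name, target pose, planned waypoints) and emits text plus references to generated media. The field names and the media file naming are hypothetical and do not reflect the system's actual data structures.

```python
# Minimal sketch of translating an assembly-plan step into a multimodal instruction.
# The AssemblyStep/Instruction fields and file names are assumptions for illustration;
# the actual system derives animations from its collision-free motion plans.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class AssemblyStep:
    part_name: str
    target_pose: Tuple[float, float, float, float, float, float]  # x, y, z, roll, pitch, yaw
    path_waypoints: List[Tuple[float, float, float]]  # collision-free path from the planner


@dataclass
class Instruction:
    text: str
    annotation_image: str  # graphical annotation identifying the correct part
    animation_file: str    # 3D animation of the planned motion


def generate_instruction(step: AssemblyStep, index: int) -> Instruction:
    text = (
        f"Step {index}: pick part '{step.part_name}' and place it at "
        f"pose {step.target_pose}, following the highlighted path."
    )
    return Instruction(
        text=text,
        annotation_image=f"{step.part_name}_annotated.png",
        animation_file=f"{step.part_name}_motion.mp4",
    )


step = AssemblyStep("bracket_A", (0.2, 0.1, 0.05, 0.0, 0.0, 1.57), [(0.5, 0.4, 0.3), (0.3, 0.2, 0.1)])
print(generate_instruction(step, 1).text)
```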
INTRODUCTION
Product manufacturing involves complex assembly operations that must be performed by humans and/or robots. Within this setting, it is imperative to define the operational roles of the human and the robot appropriately. Whereas robots are superior to humans at handling repetitive tasks like welding and bolting, humans are better at performing tasks like picking, carrying, and placing a wide range of parts without using special fixtures; humans also have a natural ability to handle a variety of assembly equipment with ease. However, humans are prone to committing assembly-related mistakes. Human workers usually follow a list of instructions to carry out assembly operations on the shop floor, and poor instructions lead to assembly errors and increased learning time. Therefore, there is a need for effective yet easy-to-follow assembly instructions for humans. Manual generation of such high-quality instructions is a time-consuming task, even when shared setups and tools are used. This motivates the need for automated generation of instructions for human workers. In this paper, we present the design of an instruction generation system that can be used to automatically generate instructions for complex assembly operations performed by humans on factory shop floors.
This paper presents a framework to build hybrid cells that support safe and efficient human-robot collaboration during assembly operations. Our approach allows asynchronous collaboration between the human and the robot. The human retrieves parts from a bin and places them in the robot's workspace, while the robot picks up the placed parts and assembles them into the product. We present the design details of the overall framework, comprising three modules: plan generation, system state monitoring, and contingency handling. We describe system state monitoring and present a characterization of the part tracking algorithm. We report results from human-robot collaboration experiments using a KUKA robot and a three-dimensional (3D)-printed mockup of a simplified jet-engine assembly to illustrate our approach.
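The asynchronous pick-and-assemble behavior implied by the monitoring and contingency modules can be sketched as a simple loop. All function names below are hypothetical placeholders standing in for the part-tracking and contingency-handling components described in the paper, and the reachability limits are invented for illustration.

```python
# Minimal sketch of the asynchronous collaboration loop, assuming a part tracker that
# reports parts placed by the human and a simple contingency for unreachable parts.
# Function names and workspace limits are placeholders, not the framework's API.

def pose_is_reachable(pose):
    """Placeholder reachability check used to trigger contingency handling."""
    x, y, z = pose
    return 0.2 <= x <= 0.8 and -0.4 <= y <= 0.4 and z >= 0.0


def run_cell(placed_parts):
    """Process parts as the human places them; a real system would loop continuously."""
    for part_id, pose in placed_parts:
        if pose_is_reachable(pose):
            print(f"Robot picks '{part_id}' at {pose} and assembles it into the product")
        else:
            print(f"Contingency: '{part_id}' is outside the robot's reach; ask the human to reposition it")


run_cell([("compressor_disk", (0.5, 0.1, 0.0)), ("shaft", (1.2, 0.0, 0.0))])
```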