Abstract: This paper presents an intelligent sewing system for personalized stent graft manufacturing, a challenging sewing task that is currently performed manually. Inspired by medical suturing robots, we adopted a single-sided sewing technique that uses a curved needle to sew stents onto fabric. A motorized surgical needle driver was attached to a 7-DoF robot arm to manipulate the needle, while a second robot controlled the position of the mandrel. A learning-from-demonstration approach was used to program the robot to sew stents onto fabric. The demonstrated sewing skill was segmented into several phases, each of which was encoded with a Gaussian Mixture Model. Generalized sewing movements were then generated from these models and used for task execution. During execution, a stereo vision system guided the robots to adjust the learnt movements according to the needle pose. Two experiments were conducted with this system, and the results show that it can robustly perform the sewing task and adapt to various needle poses. The accuracy of the sewing system was within 2 mm.
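The trajectory-generation step described above (encoding segmented demonstrations with a Gaussian Mixture Model, then generating a generalized movement from it) is commonly realized with Gaussian Mixture Regression. The sketch below illustrates that idea, assuming a time-indexed 3-D position trajectory; the variable names, dimensions, and use of scikit-learn are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of GMM encoding plus Gaussian Mixture Regression (GMR):
# fit a joint GMM over (time, position) samples from the demonstrations,
# then condition on time to regress a generalized reference trajectory.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_gmm(demos, n_components=5):
    """demos: list of (T_i, 4) arrays with columns [time, x, y, z]."""
    data = np.vstack(demos)
    return GaussianMixture(n_components=n_components,
                           covariance_type='full').fit(data)

def gmr(gmm, t_query):
    """Condition the joint GMM on time to obtain the expected position."""
    traj = np.zeros((len(t_query), 3))
    for i, t in enumerate(t_query):
        # Responsibility of each component for this time step.
        w = np.array([pi * norm.pdf(t, mu[0], np.sqrt(cov[0, 0]))
                      for pi, mu, cov in zip(gmm.weights_,
                                             gmm.means_,
                                             gmm.covariances_)])
        w = w / (w.sum() + 1e-12)
        # Blend the per-component conditional means of position given time.
        for k, (mu, cov) in enumerate(zip(gmm.means_, gmm.covariances_)):
            traj[i] += w[k] * (mu[1:] + cov[1:, 0] / cov[0, 0] * (t - mu[0]))
    return traj

# Usage: ref = gmr(fit_gmm(demos), np.linspace(0.0, 1.0, 200))
```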
This paper presents a multi-robot system for manufacturing personalized medical stent grafts. The proposed system adopts a modular design, which includes a (personalized) mandrel module, a bimanual sewing module, and a vision module. The mandrel module incorporates the personalized geometry of patients, while the bimanual sewing module adopts a learning-by-demonstration approach to transfer human hand-sewing skills to the robots. The human demonstrations were first observed by the vision module and then encoded using a statistical model to generate the reference motion trajectories. During autonomous robot sewing, the vision module coordinates the multi-robot collaboration. Experimental results show that the robots can adapt to generalized stent designs. The proposed system can also be used for other manipulation tasks, especially for the flexible production of customized products where bimanual or multi-robot cooperation is required.
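As a rough illustration of how a vision module can coordinate two arms during execution, the loop below queries a pose estimate each cycle and applies a corrective offset to the learned reference trajectory of the sewing arm while keeping the mandrel arm in step. Every name here (estimate_needle_pose, move_to, and the trajectory arrays) is a hypothetical placeholder; the paper does not specify this interface, and poses are treated as 3-D positions for simplicity.

```python
# Hypothetical vision-coordinated bimanual execution loop. The robot and
# vision interfaces are placeholders, not the authors' actual API.
import numpy as np

def execute(needle_arm, mandrel_arm, vision,
            needle_traj, mandrel_traj, target_needle_positions):
    """Step both arms through their learned reference trajectories,
    correcting each waypoint with feedback from the vision module."""
    for i in range(len(needle_traj)):
        observed = vision.estimate_needle_pose()            # stereo estimate
        error = target_needle_positions[i] - observed       # position error
        needle_arm.move_to(needle_traj[i] + error)          # adjust sewing arm
        mandrel_arm.move_to(mandrel_traj[i])                # keep arms in sync
```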
This paper presents a vision-based learning-by-demonstration approach that enables robots to learn and complete a manipulation task cooperatively. With this method, a vision system is involved in both the task demonstration and reproduction stages. An expert first demonstrates how to use tools to perform a task, while the tool motion is observed by the vision system. The demonstrations are then encoded using a statistical model to generate a reference motion trajectory. Equipped with the same tools and the learned model, the robot is guided by vision to reproduce the task. Task performance was evaluated in terms of both accuracy and speed; however, simply increasing the robot's speed can degrade reproduction accuracy. To this end, a dual-rate Kalman filter is employed to compensate for the latency between the robot and the vision system. More importantly, the sampling rate of the reference trajectory and the robot speed are optimised adaptively according to the learned motion model. We demonstrate the effectiveness of our approach on two tasks: a trajectory reproduction task and a bimanual sewing task. We show that, using our vision-based approach, the robots can learn effectively from demonstrations and perform accurate and fast task reproduction. The proposed approach is generalisable to other manipulation tasks where bimanual or multi-robot cooperation is required.
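The dual-rate idea mentioned in the abstract (predict at the fast robot control rate, correct whenever a slower, delayed vision measurement arrives) can be sketched with a standard linear Kalman filter. The constant-velocity model, update rates, and noise covariances below are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch of a dual-rate Kalman filter: predict() runs at the robot control
# rate; correct() runs only when a slower vision measurement arrives.
import numpy as np

class DualRateKF:
    def __init__(self, dt=0.01):                             # 100 Hz loop (assumed)
        self.F = np.block([[np.eye(3), dt * np.eye(3)],
                           [np.zeros((3, 3)), np.eye(3)]])   # constant velocity
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])    # vision sees position
        self.Q = 1e-4 * np.eye(6)                            # process noise (assumed)
        self.R = 1e-3 * np.eye(3)                            # measurement noise (assumed)
        self.x = np.zeros(6)                                 # state: [position, velocity]
        self.P = np.eye(6)

    def predict(self):
        """Run every control tick to keep a fresh state estimate."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def correct(self, z):
        """Run only when a 3-D vision measurement z becomes available."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P

# Usage: call kf.predict() every control tick; whenever the slower vision
# result arrives (possibly delayed), call kf.correct(measured_position).
```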