Abstract-We present a novel system for coordinated, task-based control of a dual-arm industrial robot, targeting the general tasks of visual servoing and bimanual hybrid motion/force control. The robot, consisting of a rotating torso and two seven degree-of-freedom arms, performs autonomous vision-based alignment of both arms with the aid of fiducial markers, two-handed grasping with force control, and robust object manipulation in a tele-robotic framework. The operator commands the desired object position with hand motions via a Microsoft Kinect while the autonomous force controller maintains a stable grasp; gestures detected by the Kinect also select different operation modes. We demonstrate the effectiveness of our approach on a variety of common objects with different sizes, shapes, weights, and surface compliances.

Note to Practitioners-Industrial robots are traditionally preprogrammed with teach pendants to perform simple repetitive tasks without sensor feedback. This work was motivated by demonstrating that industrial robots can also perform advanced, sensor-based tasks such as visual servoing, force-feedback control, and tele-operation. Industrial robots are typically limited by the long delay between command and action, but with careful tuning we show that these sensor-based methods remain feasible even with off-the-shelf sensors. The experimental testbed is a 15 degree-of-freedom dual-arm industrial robot; each wrist is outfitted with a camera, a rubber contact pad, and a force/torque sensor. A Microsoft Kinect communicates operator commands through gestures. The integrated system comprises seven processes running on three computers (two running Windows 7, one running Windows XP) connected through a local hub using the TCP/IP protocol.
The communication between the components is based on Robot Raconteur, an object-oriented distributed control and communication software architecture. Though the implementation is for our specific testbed, the approach is sufficiently general to be extended to other robots, end effectors, sensors, and operating systems.
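To make the distributed architecture concrete, the following is a minimal sketch of the service/client pattern that such a system relies on: one process exposes an object whose methods other processes invoke over TCP/IP. This is an illustrative stand-in using only the Python standard library; the class name, method name, and JSON wire format are assumptions for the example and do not reflect Robot Raconteur's actual API.

```python
import json
import socket
import threading

class ArmService:
    """Toy service object exposing one remotely callable method.
    A real arm controller would command hardware; this one echoes."""
    def set_joint_target(self, joints):
        return {"status": "ok", "echo": joints}

def serve_once(service, host="127.0.0.1"):
    """Serve a single JSON-encoded request, dispatch it to the
    service object, send the reply, then shut down."""
    srv = socket.create_server((host, 0))  # ephemeral port
    port = srv.getsockname()[1]

    def handler():
        conn, _ = srv.accept()
        with conn:
            request = json.loads(conn.recv(4096).decode())
            method = getattr(service, request["method"])
            reply = method(*request["args"])
            conn.sendall(json.dumps(reply).encode())
        srv.close()

    threading.Thread(target=handler, daemon=True).start()
    return port

def call(port, method, *args):
    """Client side: send one request and wait for the reply."""
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall(json.dumps({"method": method, "args": list(args)}).encode())
        return json.loads(conn.recv(4096).decode())

port = serve_once(ArmService())
result = call(port, "set_joint_target", [0.0, 0.5, -0.5])
```

In the actual system, the service definition, transport, and marshaling are all handled by Robot Raconteur, so each of the seven processes only implements its object interface.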