Whereas vision and force feedback (either at the wrist or at the joint level) for robotic manipulation purposes has received considerable attention in the literature, the benefits that tactile sensors can provide when combined with vision and force have rarely been explored. In fact, there are some situations in which vision and force feedback cannot guarantee robust manipulation. Vision is frequently subject to calibration errors, occlusions and outliers, whereas force feedback can only provide useful information along those directions that are constrained by the environment. In tasks where the visual feedback contains errors and the contact configuration does not constrain all the Cartesian degrees of freedom, vision and force sensors are not sufficient to guarantee a successful execution. Many of the tasks performed in our daily life that do not require a firm grasp belong to this category. Therefore, it is important to develop strategies for robustly dealing with these situations. In this article, a new framework for combining tactile information with vision and force feedback is proposed and validated with the task of opening a sliding door. Results show that the vision-tactile-force approach outperforms vision-force and force-alone, in the sense that it allows the vision errors to be corrected while a suitable contact configuration is guaranteed.

M. Prats
Computer Science and Engineering Department, Jaume-I University, Castellón, Spain
E-mail: mprats@icc.uji.es

P.J. Sanz
Computer Science and Engineering Department, Jaume-I University, Castellón, Spain
E-mail: sanzp@icc.uji.es

A.P. del Pobil
Computer Science and Engineering Department, Jaume-I University, Castellón, Spain, and Department of Interaction Science, Sungkyunkwan University, Seoul, South Korea
E-mail: pobil@icc.uji.es