In tele-impedance, the human operator controls not only the motion of the remote robot but also its impedance through various interfaces. While this can improve the performance of the remote robot in unpredictable and unstructured environments, it can increase the operator's workload compared to classic teleoperation. This paper presents a novel method for semi-autonomous tele-impedance, in which the controller exploits robot vision to detect objects in the environment and select the appropriate impedance. For example, if the vision system detects a fragile object such as glass, the controller autonomously lowers the impedance to increase safety, while the human commands the motion to initiate and perform the interaction. If the vision algorithm is not confident in its detection, an additional verbal communication interface enables the human to confirm or correct the autonomous decision. The method therefore comprises four modes: (i) perturbation rejection mode, (ii) object property detection mode, (iii) verbal confirmation mode, and (iv) voice control mode. We conducted proof-of-concept experiments on a teleoperation setup, where the human operator performed position tracking and contact-establishment tasks.
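The decision flow described above, autonomous impedance selection from a vision detection with a verbal confirmation fallback when confidence is low, can be illustrated with the minimal sketch below. The object labels, stiffness presets, confidence threshold, and function names are hypothetical assumptions for illustration, not values or interfaces from the paper.

```python
# Illustrative sketch (not the authors' implementation): select the commanded
# stiffness from a vision detection, falling back to verbal confirmation when
# the classifier confidence is low. All names and numbers are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Detection:
    label: str         # e.g. "glass", "metal_block"
    confidence: float  # classifier confidence in [0, 1]


# Hypothetical Cartesian stiffness presets [N/m] per object class.
STIFFNESS_PRESETS = {
    "glass": 150.0,        # fragile object -> low impedance for safety
    "metal_block": 800.0,  # rigid object -> high impedance for tracking
}
DEFAULT_STIFFNESS = 500.0    # perturbation-rejection default
CONFIDENCE_THRESHOLD = 0.8   # below this, ask the operator to confirm


def select_stiffness(detection: Optional[Detection],
                     confirm_with_operator: Callable[[str], str]) -> float:
    """Return the commanded stiffness for the current detection.

    `confirm_with_operator(label) -> str` stands in for the verbal interface:
    it returns the label confirmed or corrected by the operator.
    """
    if detection is None:
        # No object detected: remain in perturbation-rejection mode.
        return DEFAULT_STIFFNESS
    label = detection.label
    if detection.confidence < CONFIDENCE_THRESHOLD:
        # Low confidence: verbal confirmation mode.
        label = confirm_with_operator(label)
    # Object property detection (or voice-corrected) mode.
    return STIFFNESS_PRESETS.get(label, DEFAULT_STIFFNESS)


if __name__ == "__main__":
    # Simulated low-confidence detection of a fragile object; the stand-in
    # verbal interface simply confirms the proposed label.
    det = Detection(label="glass", confidence=0.6)
    stiffness = select_stiffness(det, confirm_with_operator=lambda lbl: lbl)
    print(f"Commanded stiffness: {stiffness} N/m")
```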