We propose an interface for tele-operating a laparoscope-holder robot via head movement, using facial feature point detection. Fourteen feature points on the operator's face are detected with a camera, and the vertical and horizontal rotation angles of the head, together with the distance between the face and the camera, are estimated from these points using deep learning. The training data are obtained using a dummy face. The root-mean-square error (RMSE) between the estimated and directly measured values is calculated for different numbers of nodes, layers, and epochs, and suitable values are selected from the RMSE results. The trained network is then evaluated with four subjects, and the effectiveness of the proposed method is demonstrated experimentally.
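As a rough illustration of the estimation pipeline described above, the sketch below trains a small fully connected network that maps 14 facial feature points (28 coordinates) to three outputs (vertical angle, horizontal angle, distance) and reports the RMSE against the targets. This is a minimal stand-in, not the paper's implementation: the network size, learning rate, epoch count, and the synthetic data substituting for the dummy-face measurements are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the dummy-face training data:
# 28 inputs (x, y of 14 feature points), 3 regression targets
# (vertical angle, horizontal angle, face-camera distance).
n_samples, n_in, n_hidden, n_out = 500, 28, 32, 3
X = rng.normal(size=(n_samples, n_in))
true_W = rng.normal(size=(n_in, n_out))
Y = np.tanh(X) @ true_W * 0.1  # arbitrary smooth input-output mapping

# One hidden layer with tanh activation, trained by plain
# gradient descent on the mean squared error.
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)
lr = 0.05

for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)   # hidden activations
    P = H @ W2 + b2            # predicted angles and distance
    err = P - Y
    # Backpropagate the MSE gradient through both layers.
    gW2 = H.T @ err / n_samples
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)
    gW1 = X.T @ dH / n_samples
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# RMSE between estimated and reference values, as used in the
# evaluation to choose the numbers of nodes, layers, and epochs.
pred = np.tanh(X @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - Y) ** 2)))
print(f"training RMSE: {rmse:.4f}")
```

In practice one would repeat this for several hidden-layer widths, depths, and epoch counts and pick the configuration with the lowest RMSE on held-out data, which is the selection procedure the abstract describes.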