Table plane detection is a prerequisite step in developing object-finding aid systems for visually impaired people. To determine the table plane in a scene, the planes in the scene must first be detected, and the table plane is then identified among them based on its specific characteristics. Although a number of approaches have been proposed for plane segmentation, table plane detection in particular has received little attention. In this paper, we propose a table plane detection method using information coming from a Microsoft Kinect sensor. The contribution of the paper is threefold. First, in the plane detection step, a dedicated down-sampling algorithm is applied to the original point cloud, representing it as an organized point cloud structure in order to achieve real-time computation. Second, the acceleration information provided by the Kinect sensor is employed to identify the table plane among all detected planes. Finally, three different measures for the evaluation of the table plane detector are defined. The proposed method has been evaluated on a dataset of 10 scenes and on a published RGB-D dataset, both of which represent common contexts in the daily activities of visually impaired people. The proposed method outperforms a state-of-the-art method based on PROSAC and obtains results comparable to a method based on organized point clouds while running at a frame rate six times higher.
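The abstract does not spell out how the acceleration information is used, but a common way to exploit the Kinect accelerometer for this step is to treat the measured gravity vector as the expected normal of any horizontal surface and keep the detected plane whose normal aligns with it most closely. The Python sketch below illustrates that idea only; the function name, plane representation, and angular threshold are hypothetical and not taken from the paper.

```python
import numpy as np

def select_table_plane(planes, gravity, angle_thresh_deg=10.0):
    """Pick the candidate plane whose normal is most closely aligned with
    the gravity direction reported by the Kinect accelerometer.

    `planes` is a list of (a, b, c, d) coefficients of planes
    a*x + b*y + c*z + d = 0; `gravity` is a 3-vector in the same camera
    coordinate frame.  (Hypothetical interface, for illustration only.)
    """
    g = np.asarray(gravity, dtype=float)
    g /= np.linalg.norm(g)

    # Start the running maximum at the acceptance threshold, so planes that
    # are too far from horizontal are rejected outright.
    best_plane, best_cos = None, np.cos(np.deg2rad(angle_thresh_deg))
    for coeffs in planes:
        n = np.asarray(coeffs[:3], dtype=float)
        n /= np.linalg.norm(n)
        # Use |cos| so the sign convention of the plane normal does not matter.
        cos_angle = abs(np.dot(n, g))
        if cos_angle >= best_cos:
            best_plane, best_cos = coeffs, cos_angle
    return best_plane  # None if no plane is close enough to horizontal

# Example: two candidate planes, gravity roughly along the camera's -y axis.
planes = [(0.01, -0.99, 0.02, 0.8),   # near-horizontal surface (table-like)
          (0.98, 0.05, 0.17, -1.2)]   # near-vertical surface (wall-like)
print(select_table_plane(planes, gravity=(0.0, -9.81, 0.05)))
```

In practice, further cues such as plane height or extent would typically be combined with this gravity-alignment test to separate the table from the floor or other horizontal surfaces.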