Gesture-based human-robot interaction has been an active area of research in recent years. A central goal for researchers has been to build gesture recognition systems that are robust to lighting conditions and background clutter. This research proposes a Kinect-based 3D gesture recognition and adaptation framework for human-robot interaction. The framework consists of four modules: pointing gesture recognition, 3D dynamic gesture recognition, gesture adaptation, and robot navigation. The dynamic gesture recognition module employs three distinct classifiers: HMM, multi-class SVM, and CNN. The adaptation module handles new and unrecognized gestures through either semi-supervised self-adaptation or user-consent-based adaptation. A Graphical User Interface (GUI) is built for training and testing the proposed system on the fly, and a simple simulator with two robot-navigation algorithms is developed to test robot navigation driven by the recognized gestures. The framework is trained and evaluated with five-fold cross-validation on a total of 3,600 instances of ten predefined gestures performed by 24 persons, divided into three age categories (Young, Middle-Aged, and Adult), each contributing 1,200 gestures. In dynamic gesture recognition, the proposed system achieves maximum accuracies of 95.67% with HMM (Middle-Aged category), 92.59% with SVM (Middle-Aged category), and 89.58% with CNN (Young category). Across all three age categories, the system achieves average accuracies of 94.61%, 91.95%, and 88.97% with HMM, SVM, and CNN, respectively. Moreover, the system recognizes pointing gestures in real time.
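To make the evaluation protocol concrete, the following is a minimal sketch of the five-fold cross-validation step for one of the three classifiers (the multi-class SVM), assuming pre-extracted gesture feature vectors and scikit-learn; the feature dimensionality, kernel choice, and variable names are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Placeholder data standing in for pre-extracted skeleton features:
# 3,600 gesture instances, ten predefined gesture classes (assumed layout).
rng = np.random.default_rng(0)
X = rng.normal(size=(3600, 60))
y = rng.integers(0, 10, size=3600)

# Five-fold cross-validation, stratified so every fold keeps the class balance.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in cv.split(X, y):
    clf = SVC(kernel="rbf")  # multi-class SVM, one of the three classifiers mentioned
    clf.fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean five-fold accuracy: {np.mean(scores):.4f}")

The same loop would be repeated per age category (Young, Middle-Aged, Adult) to obtain the per-category accuracies reported above, with HMM and CNN classifiers substituted for the SVM.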