Recent studies have explored controlling intelligent devices such as robots with voice and body gestures, as in human-to-human communication, instead of commands issued through a keyboard or mouse. Humans obtain about 80% of their information through vision, and about 55% of the meaning conveyed in communication is visual; moreover, hand gestures are the most frequently used form of non-verbal communication. Consequently, much research has aimed at commanding robots through hand gestures. Existing studies, however, were limited to recognizing fixed hand gestures and shapes, requiring users to be trained on the motions that can be used to communicate with robots. To address this problem, we first use fuzzy inference to select meaningful gestures from a continuous gesture stream. Hand positions are interpolated using Lagrange interpolation, and a Kalman filter is applied to handle object occlusion and self-occlusion. After encoding the sequence of continuously received hand positions as a chain code, fuzzy theory is used to select meaningful motions from among the various hand gestures. Finally, we recognize the selected meaningful gestures using a recurrent neural network with a bidirectional long short-term memory (LSTM) architecture. Although selecting meaningful motions was difficult, experimental results show that when the selection was correct, the hand gesture recognition rate was very high.
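
Two of the preprocessing steps described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it fills a missing hand position with Lagrange interpolation and encodes a hand trajectory as an 8-direction chain code. The function names and the direction layout (0 = east, counter-clockwise) are illustrative assumptions.

```python
import math

def lagrange_interpolate(samples, t):
    """Evaluate the Lagrange polynomial through (t_i, x_i) samples at time t.

    Used here to estimate a hand coordinate at a frame where detection failed,
    from coordinates observed at neighboring frames.
    """
    result = 0.0
    for i, (ti, xi) in enumerate(samples):
        term = xi
        for j, (tj, _) in enumerate(samples):
            if i != j:
                term *= (t - tj) / (ti - tj)
        result += term
    return result

def chain_code(points):
    """Map each consecutive hand displacement to one of 8 directions.

    Direction 0 is east; codes increase counter-clockwise in 45-degree steps.
    """
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)          # angle in [-pi, pi]
        codes.append(int(round(angle / (math.pi / 4))) % 8)
    return codes
```

For example, a hand moving right and then up yields the codes `[0, 2]`, and a quadratic trajectory sampled at three frames is reproduced exactly by the interpolation; the resulting code sequence would then be passed to the fuzzy selection and LSTM recognition stages.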