Communication between a human and a humanoid robot is a real challenge for researchers in the field of robotics. Despite progress in acoustic modelling and natural language processing, humanoid robots are still outperformed by humans in real-life interaction, because speech and human emotions are highly ambiguous owing to noise and external audio events in the robot's environment. Humans assign a correct interpretation to a perceived ambiguous signal, but humanoid robots cannot. The most common software approach to interpreting such ambiguous signals is fuzzy-based. The adaptive neuro-fuzzy inference system (ANFIS) is an emotion recognition system based on fuzzy sets; it acts like the thalamus of the human brain and is responsible for the sensorial perception of the humanoid robot. Our goal in this work is to create fuzzy-based sound-signal software and a fuzzy-based genetic algorithm with high performance in human-robot communication, helping humanoid robots to think and to understand human speech, human emotions, and the ambiguous signals from the robot's environment as reliably as a human does.
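To make the fuzzy-set idea concrete, the following is a minimal illustrative sketch, not the system described above: hypothetical triangular membership functions and a tiny Mamdani-style rule base that map two acoustic features of an ambiguous signal (loudness and pitch, both assumed normalized to [0, 1]) to fuzzy degrees of candidate emotions. All names, ranges, and rules are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify(loudness, pitch):
    """Degrees of membership in each linguistic term (made-up ranges)."""
    return {
        "quiet": tri(loudness, -0.1, 0.0, 0.5),
        "loud": tri(loudness, 0.4, 1.0, 1.1),
        "low_pitch": tri(pitch, -0.1, 0.0, 0.5),
        "high_pitch": tri(pitch, 0.4, 1.0, 1.1),
    }

def infer_emotion(loudness, pitch):
    """Tiny Mamdani-style rule base: AND is min, the strongest rule wins."""
    m = fuzzify(loudness, pitch)
    rules = {
        "anger": min(m["loud"], m["high_pitch"]),
        "sadness": min(m["quiet"], m["low_pitch"]),
        "surprise": min(m["quiet"], m["high_pitch"]),
    }
    # Return the emotion with the highest firing strength plus all degrees,
    # so the ambiguity of the signal stays visible to the caller.
    return max(rules, key=rules.get), rules
```

For example, a loud, high-pitched signal (`infer_emotion(0.9, 0.9)`) fires the "anger" rule most strongly, while a quiet, low-pitched one favors "sadness"; an ANFIS would additionally tune the membership-function parameters from training data rather than fixing them by hand as done here.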