The purpose of this paper is to develop a speaker-independent emotion recognition system for emotional interaction between humans and robots. Recognizing human emotion from speech is one of the challenges in the field of human-robot interaction. The ability to recognize the emotions of an unspecified speaker, called speaker-independent emotion recognition, is essential for the commercial use of speech emotion recognition systems. In general, however, speaker-independent systems show lower performance than speaker-dependent systems, because emotional feature values depend on the speaker and his or her gender. This paper therefore describes the realization of speaker-independent emotion recognition based on separation and rejection, which makes the emotion recognition system accurate and stable. Comparison of the proposed methods with the conventional method clearly confirms their improvement and effectiveness.