The purpose of this study was to develop an auditory emotion recognition function that can determine the emotion evoked by everyday environmental sounds. To this end, sound stimuli were selected from the International Affective Digitized Sounds (IADS-2), a standardized database of sounds intended to evoke emotion, and four psychoacoustic parameters (loudness, sharpness, roughness, and fluctuation strength) were extracted from each sound. In addition, 140 college students rated the sounds on an emotion adjective scale measuring three basic emotions (happiness, sadness, and negativity). A discriminant analysis predicting these basic emotions from the psychoacoustic parameters yielded a discriminant function with an overall discriminant accuracy of 88.9% on the training data. To validate the discriminant function, the same four psychoacoustic parameters were extracted from 46 sound stimuli collected from another database and substituted into the function, which produced an overall discriminant accuracy of 63.04%. Our findings suggest that daily-life sounds, beyond voice and music, can be used in a human-machine interface.
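As a rough illustration of the classification pipeline summarized above (four psychoacoustic features, a discriminant function fit on training sounds, then applied to held-out sounds), the following Python sketch uses scikit-learn's LinearDiscriminantAnalysis. It is not the authors' code: the feature values and emotion labels are synthetic placeholders, and the library choice is an assumption.

```python
# Minimal sketch of a discriminant analysis over four psychoacoustic features.
# All data below are synthetic placeholders, not the IADS-2 ratings from the study.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FEATURES = ["loudness", "sharpness", "roughness", "fluctuation_strength"]
EMOTIONS = ["happiness", "sadness", "negativity"]

rng = np.random.default_rng(0)

# Hypothetical training set: 60 sounds x 4 psychoacoustic parameters with emotion labels.
X_train = rng.normal(size=(60, len(FEATURES)))
y_train = rng.integers(0, len(EMOTIONS), size=60)

# Fit the discriminant function and report resubstitution (training) accuracy.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
print("training accuracy:", lda.score(X_train, y_train))

# Validation on a separate set, analogous to the 46 sounds from a second database.
X_valid = rng.normal(size=(46, len(FEATURES)))
y_valid = rng.integers(0, len(EMOTIONS), size=46)
print("validation accuracy:", lda.score(X_valid, y_valid))
```

With real psychoacoustic measurements in place of the random arrays, the same fit/score calls would reproduce the train-then-validate procedure the abstract describes.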