To acquire the state of human emotions from speech, we have gathered the data required to understand the concept behind this process. Human emotions can be predicted from facial expressions or from the tone of voice. Reading facial expressions is one of the major tasks in image processing; likewise, each emotion carries a different tone in the voice. Analysing emotions from speech therefore requires estimating the approximate frequencies associated with each emotional tone. This is a challenging task because every speaker has a different pitch, and the frequencies of the same speaker vary with emotion. Another major issue is noise in the input while a person is speaking, caused by low-quality recordings or the surrounding environment. The basic emotions considered are happy, angry, sad, bored, surprised, disgust, and fear. For this project, the key prerequisite is speech recognition: the machine must be able to read the input in the form of speech and analyse its contents. The input is converted into WAV format, and the machine must also be able to extract the frequencies. The calculation is performed using several defined methodologies.
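As a minimal sketch of this pipeline, the following Python snippet loads a WAV recording, estimates the speaker's fundamental frequency (pitch) frame by frame, and summarises it so that it can later be compared across emotional states. It assumes the audio has already been converted to WAV and uses the librosa library; the file name and frequency bounds are illustrative assumptions, not part of the original method.

```python
import numpy as np
import librosa


def extract_pitch_features(wav_path: str) -> dict:
    # Load the WAV file at its native sampling rate.
    y, sr = librosa.load(wav_path, sr=None)

    # Estimate the fundamental frequency (F0) per frame with probabilistic YIN.
    # The 65-2093 Hz range is an assumed bound that roughly covers speaking voices.
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65.0, fmax=2093.0, sr=sr)

    # Keep only voiced frames; unvoiced frames are returned as NaN.
    voiced_f0 = f0[~np.isnan(f0)]

    # Simple summary statistics intended to capture how pitch varies with emotion.
    return {
        "mean_f0_hz": float(np.mean(voiced_f0)) if voiced_f0.size else 0.0,
        "std_f0_hz": float(np.std(voiced_f0)) if voiced_f0.size else 0.0,
        "voiced_ratio": float(np.mean(voiced_flag)),
    }


if __name__ == "__main__":
    # "sample.wav" is a placeholder path for an emotion-labelled recording.
    print(extract_pitch_features("sample.wav"))
```

In practice, such per-utterance pitch statistics would be computed for recordings of each emotion and fed into whichever classification methodology the project adopts.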