Automatic emotion recognition can dramatically enhance the development of human/machine dialogue. Indeed, it allows a computer to determine the emotion felt by the user and adapt its behavior accordingly. This paper presents a new method for fusing signals for the multimodal recognition of eight basic emotions from physiological signals. After a learning phase in which an emotion database is constructed, we apply the recognition algorithm to each modality separately. We then merge these per-modality decisions with a decision-fusion approach to improve the recognition rate. The experiments show that the proposed method achieves high-accuracy emotion recognition: we obtain a recognition rate of 81.69% under certain conditions.
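The abstract does not specify which fusion rule is used; a common decision-fusion baseline is majority voting over the labels emitted by each modality's classifier. The sketch below is a minimal illustration under that assumption (the function name and example labels are hypothetical, not from the paper):

```python
from collections import Counter

def fuse_decisions(decisions):
    """Majority-vote decision fusion: each modality's classifier emits one
    emotion label, and the fused decision is the most frequent label."""
    counts = Counter(decisions)
    label, _ = counts.most_common(1)[0]
    return label

# Example: three modality-level classifiers, two agree.
fused = fuse_decisions(["joy", "joy", "surprise"])  # -> "joy"
```

More elaborate schemes weight each modality by its individual recognition rate before voting; the paper's actual rule may differ.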
Energy consumption is one of the main constraints faced by designers of communicating objects. Indeed, we are seeing a real increase in the number of communicating objects, ranging from consumer applications (video games, health-care objects) to industrial applications (M2M, automatic car parking, drones). This paper presents a new high-level model for estimating the autonomy of wearable communicating objects, and more particularly of emotion detection systems, together with a design space exploration. The innovation of this methodology is to meet the autonomy constraint of health objects while maintaining reasonable performance, defined here by the emotion recognition rate. The influence of the architecture (RF protocol, kind and number of sensors, etc.) and its configuration on the system's autonomy and emotion recognition rate is studied in order to propose the most suitable system. The results show that we can obtain both a high recognition rate and sufficient autonomy for users.
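A high-level autonomy estimate of the kind described typically divides the battery's energy budget by the average power drawn by the sensors and a duty-cycled RF link. The sketch below illustrates that idea; the function name, parameters, and numbers are assumptions for illustration, not values from the paper:

```python
def autonomy_hours(battery_mwh, sensor_mw, rf_mw, rf_duty_cycle):
    """High-level autonomy estimate: battery energy (mWh) divided by the
    average power (mW) of always-on sensors plus a duty-cycled radio."""
    avg_power_mw = sensor_mw + rf_mw * rf_duty_cycle
    return battery_mwh / avg_power_mw

# e.g. a 500 mWh cell, 2 mW of sensors, a 40 mW radio active 5% of the time:
hours = autonomy_hours(500, 2.0, 40.0, 0.05)  # 500 / 4 = 125 hours
```

Sweeping such a model over candidate architectures (RF protocol, sensor count) is one simple way to perform the design space exploration the abstract mentions.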
This demo paper presents a system that builds a timeline of the salient actions of a soccer game from the tweets posted by users. It combines information from external knowledge bases to enrich the content of tweets and applies graph theory to model relations between actions (e.g. goals, penalties) and participants of a game (e.g. players, teams). In the demo, a web application displays in near real time the actions detected from tweets posted by users for a given match of Euro 2016. Our tools are freely available at https://bitbucket.org/eamosse/event_tracking.
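One minimal way to represent such action/participant relations is an undirected graph stored as an adjacency map, with actions and participants as nodes and relations as edges. This sketch is an assumed representation, not the demo's actual code; the node labels are illustrative:

```python
from collections import defaultdict

# Adjacency map: node -> set of related nodes.
edges = defaultdict(set)

def add_relation(action, participant):
    """Link an action node (e.g. a goal) to a participant node
    (e.g. a player or a team) with an undirected edge."""
    edges[action].add(participant)
    edges[participant].add(action)

add_relation("goal_23min", "player:Payet")
add_relation("goal_23min", "team:France")
```

Traversing a participant's neighborhood then yields every action that participant was involved in, which is what a timeline view needs.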