Recent developments in the e-commerce market have led consumers to attach growing importance to third-party product reviews before making a purchase. In order to improve its offerings by intercepting consumer discontent, the industry has paid increasing attention to systems able to identify the sentiment expressed by buyers, whether positive or negative. From a technological point of view, the literature of recent years has developed two types of methodologies: those based on lexicons and those based on machine and deep learning techniques. This study compares these technologies on the Italian market, one of the largest in the world, exploiting an ad hoc dataset: the scientific evidence generally shows the superiority of language models such as BERT, built on deep neural networks, but it raises several considerations on the effectiveness and improvement of these solutions, when compared to lexicon-based ones, in the presence of datasets of reduced size such as the one under study, a common condition for languages other than English or Chinese.
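As a hedged illustration of the two families of methods compared above, the following Python sketch contrasts a toy lexicon-based polarity score with an off-the-shelf BERT-style classifier. The tiny lexicon and the multilingual checkpoint are placeholders chosen for illustration only; they are not the dataset, lexicon, or model used in the study.

```python
# Minimal sketch contrasting a lexicon-based polarity score with a
# transformer-based classifier for Italian reviews. The lexicon below and the
# model checkpoint are illustrative placeholders, not the study's actual setup.
from transformers import pipeline

# Toy Italian polarity lexicon (hypothetical; a real lexicon covers thousands of terms).
LEXICON = {"ottimo": 1, "eccellente": 1, "buono": 1,
           "pessimo": -1, "terribile": -1, "difettoso": -1}

def lexicon_sentiment(review: str) -> str:
    """Sum the polarity of lexicon words found in the review."""
    score = sum(LEXICON.get(tok.lower().strip(".,!?"), 0) for tok in review.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# BERT-style classifier: any Italian or multilingual sentiment checkpoint could be used here.
bert_classifier = pipeline("sentiment-analysis",
                           model="nlptown/bert-base-multilingual-uncased-sentiment")

review = "Prodotto pessimo, è arrivato difettoso."
print("lexicon:", lexicon_sentiment(review))
print("BERT   :", bert_classifier(review)[0])
```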
This work proposes a new approach to deception detection, based on finding significant differences between liars and truth tellers through the analysis of their verbal and non-verbal behavior. The approach combines two factors: multimodal data collection and T-pattern analysis. The multimodal approach has been acknowledged in the literature on deception detection and in several studies concerning the understanding of communicative phenomena. We believe that a methodology such as T-pattern analysis can draw the greatest advantage from an approach that combines data coming from multiple signaling systems. T-pattern analysis is, in fact, a recent methodology for the analysis of behavior that unveils the complex structure underlying the organization of human behavior. For this work, we conducted an experimental study and analyzed data from a single subject. The results showed that T-pattern analysis made it possible to find differences between truth telling and lying. This work aims at advancing the state of knowledge about deception detection, with the final goal of proposing a useful tool for improving public security and well-being.
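As a purely illustrative aid, the sketch below captures one intuition behind T-pattern detection: checking whether one event type tends to follow another within a roughly constant "critical interval" across a timestamped behavioral stream. It is a toy simplification, not the full T-pattern algorithm (as implemented, for instance, in the Theme software), and the event labels, timestamps, and thresholds are hypothetical.

```python
# Toy illustration of the core intuition of T-pattern detection: for each pair
# of event types (A, B), count how often B follows A within a fixed maximum gap.
# This is a simplified sketch, not the statistical critical-interval test used
# by the actual T-pattern methodology.
from itertools import product

# Hypothetical multimodal event stream: (timestamp in seconds, event label).
events = [(1.0, "gaze_aversion"), (1.4, "speech_pause"),
          (5.2, "gaze_aversion"), (5.7, "speech_pause"),
          (9.1, "hand_gesture"), (12.0, "gaze_aversion"), (12.5, "speech_pause")]

def candidate_pairs(events, max_gap=1.0, min_support=3):
    """Return (A, B, count) triples where B follows A within max_gap at least min_support times."""
    labels = {label for _, label in events}
    found = []
    for a, b in product(labels, repeat=2):
        if a == b:
            continue
        hits = sum(1 for t1, l1 in events for t2, l2 in events
                   if l1 == a and l2 == b and 0 < t2 - t1 <= max_gap)
        if hits >= min_support:
            found.append((a, b, hits))
    return found

print(candidate_pairs(events))  # e.g. [('gaze_aversion', 'speech_pause', 3)]
```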
This paper presents a Lexicon-Grammar based method for the automatic extraction of spatial relations from unstructured Italian data. We used the software Nooj to build sophisticated local grammars and electronic dictionaries associated with the lexicon-grammar classes of the Italian intransitive spatial verbs (i.e. 234 verbal entries), and we applied them to the Italian text Il Codice da Vinci ('The Da Vinci Code', by Dan Brown) in order to parse the spatial predicate-argument structures. In addition, Nooj allowed us to automatically annotate (in XML format) the words (or sequences of words) that in each sentence (S) of the text play the 'spatial roles' of Figure (F), Motion (M) and Ground (G). Finally, the results of the experiment and the evaluation of this method are discussed.
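To illustrate the kind of output described above, the following Python sketch shows a minimal XML annotation step for a sentence whose spatial roles have already been identified. The example sentence, the role spans, and the helper function are hypothetical; in the study itself the extraction and annotation are performed by Nooj local grammars and electronic dictionaries, not by this code.

```python
# Minimal sketch of the annotation step described above: given a sentence whose
# spatial predicate-argument structure has already been identified, wrap the
# Figure (F), Motion (M) and Ground (G) spans in XML tags inside a sentence
# element (S). The sentence and spans below are hypothetical examples.
from xml.etree.ElementTree import Element, SubElement, tostring

def annotate_spatial_roles(figure: str, motion: str, ground: str) -> str:
    """Return an XML <S> element containing the three spatial roles as children."""
    s = Element("S")
    SubElement(s, "F").text = figure   # Figure: the moving or located entity
    SubElement(s, "M").text = motion   # Motion: the intransitive spatial verb
    SubElement(s, "G").text = ground   # Ground: the reference location
    return tostring(s, encoding="unicode")

# "Langdon entra nel museo" -> Figure: Langdon, Motion: entra, Ground: nel museo
print(annotate_spatial_roles("Langdon", "entra", "nel museo"))
# <S><F>Langdon</F><M>entra</M><G>nel museo</G></S>
```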