Models trained to estimate word probabilities in context have become ubiquitous in natural language processing. How do these models use lexical cues in context to inform their word probabilities? To answer this question, we present a case study analyzing the pre-trained BERT model with tests informed by semantic priming. Using English lexical stimuli that show priming in humans, we find that BERT too shows "priming," predicting a word with greater probability when the context includes a related word versus an unrelated one. This effect decreases as the amount of information provided by the context increases. Follow-up analysis shows BERT to be increasingly distracted by related prime words as context becomes more informative, assigning lower probabilities to related words. Our findings highlight the importance of considering contextual constraint effects when studying word prediction in these models, and highlight possible parallels with human processing.
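The priming test described here reduces to comparing the probability a masked language model assigns to a target word under two contexts that differ only in the prime. Below is a minimal sketch of that comparison, assuming the HuggingFace transformers library and the bert-base-uncased checkpoint; the sentence frame and the prime/target pairs ("doctor"/"table", "nurse") are illustrative stand-ins, not the study's actual stimuli.

```python
# Sketch: compare BERT's probability for a target word when the context
# contains a related prime versus an unrelated one. Word pairs and the
# sentence frame are illustrative, not the paper's stimuli.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def target_probability(context: str, target: str) -> float:
    """Probability BERT assigns to `target` at the [MASK] position."""
    inputs = tokenizer(context, return_tensors="pt")
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_idx], dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(target)].item()

# Same frame, only the prime word is swapped.
related   = target_probability("He saw a doctor and then a [MASK].", "nurse")
unrelated = target_probability("He saw a table and then a [MASK].", "nurse")
print(f"related prime: {related:.4f}  unrelated prime: {unrelated:.4f}")
```

A "priming" effect in this sense is simply a higher probability in the related condition than in the unrelated one.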
Purpose: This study aims to predict popular contributors through text representations of user-generated content in open crowds.
Design/methodology/approach: Three text representation approaches (count vectors, Tf-Idf vectors, and word embeddings), combined with supervised machine learning techniques, are used to generate popular contributor predictions.
Findings: The experiments show that popular contributors in open crowds can be predicted successfully from user-generated content: the F1 scores of all models exceed the baseline.
Research limitations/implications: This research presents new empirical evidence, drawn from text representations of user-generated content, on why some contributors' ideas are more viral than others in open crowds.
Practical implications: This research suggests that companies can learn from popular contributors in ways that help them improve customer agility and better satisfy customers' needs. In addition to boosting customer engagement and triggering discussion, popular contributors' ideas provide insight into the latest trends and customer preferences. The results of this study will benefit marketing strategy, new product development, customer agility, and the management of information systems.
Originality/value: The paper provides new empirical evidence for popular contributor prediction in an innovation crowd through text representation approaches.
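The pipeline this abstract describes maps directly onto a standard text-classification setup. The sketch below, using scikit-learn, shows a Tf-Idf representation feeding a supervised classifier that is scored by F1 against a majority-class baseline; the posts, labels, and choice of logistic regression are toy assumptions, not the paper's data or model.

```python
# Sketch of the described pipeline: Tf-Idf features + supervised
# classifier to flag popular contributors, compared to a baseline by F1.
# Data, labels, and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.dummy import DummyClassifier

posts = ["love this feature idea", "add dark mode please",
         "integrate with our calendar", "my app keeps crashing",
         "what time is support open", "refund not received yet"]
labels = [1, 1, 1, 0, 0, 0]  # 1 = post by a popular contributor (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.5, stratify=labels, random_state=0)

vec = TfidfVectorizer()               # swap in CountVectorizer for count vectors
X_tr = vec.fit_transform(X_train)
X_te = vec.transform(X_test)

clf = LogisticRegression().fit(X_tr, y_train)
baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_train)

print("model F1:   ", f1_score(y_test, clf.predict(X_te)))
print("baseline F1:", f1_score(y_test, baseline.predict(X_te)))
```

The word-embedding variant would replace the vectorizer with averaged pre-trained embeddings per post; the rest of the pipeline stays the same.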
Twitter serves as a data source for many Natural Language Processing (NLP) tasks. Identifying topics on Twitter can be challenging due to its continuously updating data stream. In this paper, we present an unsupervised graph-based framework to identify the evolution of sub-topics within two weeks of real-world Twitter data. We first employ the Markov Clustering Algorithm (MCL) with a node removal method to identify optimal graph clusters from temporal Graph-of-Words (GoW) representations. Subsequently, we model the clustering transitions between the temporal graphs to identify the topic evolution. Finally, the transition flows generated by the computational approach and by human annotations are compared to validate our framework.
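To make the core pipeline concrete, here is a minimal sketch of the two named building blocks: a Graph-of-Words built from sliding-window co-occurrence, clustered by a bare-bones Markov Clustering loop (expansion followed by inflation). The window size, inflation parameter, tokens, and the cluster-extraction heuristic are illustrative assumptions; the paper's node removal step and the temporal transition modeling are omitted.

```python
# Minimal sketch: Graph-of-Words from a sliding co-occurrence window,
# clustered with a plain Markov Clustering (MCL) loop.
import numpy as np
import networkx as nx

def graph_of_words(tokens, window=2):
    """Connect words that co-occur within a sliding window."""
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for u in tokens[i + 1 : i + 1 + window]:
            if u != w:
                g.add_edge(w, u)
    return g

def mcl(adj, inflation=2.0, iters=50):
    """Plain MCL: column-normalize, square (expansion), raise entries
    to `inflation` and renormalize, until the matrix stops changing."""
    m = adj + np.eye(len(adj))          # self-loops stabilize the walk
    m = m / m.sum(axis=0)
    for _ in range(iters):
        prev = m
        m = np.linalg.matrix_power(m, 2)    # expansion
        m = m ** inflation                  # inflation
        m = m / m.sum(axis=0)
        if np.allclose(m, prev):
            break
    # Each surviving (attractor) row defines a cluster of its nonzero columns.
    return {tuple(np.nonzero(row > 1e-6)[0]) for row in m if row.sum() > 1e-6}

tokens = "flood warning river flood rain river storm rain".split()
g = graph_of_words(tokens)
nodes = list(g.nodes)
adj = nx.to_numpy_array(g, nodelist=nodes)
for cluster in mcl(adj):
    print([nodes[i] for i in cluster])
```

Running this per time slice yields one set of clusters per temporal graph; the framework's transition modeling then tracks how clusters map onto each other across slices.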