Emojis are used frequently in social media. A widely held view is that emojis express the emotional state of the user, which has led to research focusing on the expressiveness of emojis independently of the linguistic context. We argue that emojis and the linguistic text can modify each other's meaning: the overall communicated meaning is not a simple sum of the two channels. To study this meaning interplay, we need data indicating the overall sentiment of the entire message as well as the stand-alone sentiment of the emojis. We propose that Facebook Reactions are a good data source for this purpose. FB reactions (e.g. "Love" and "Angry") indicate the readers' overall sentiment, against which we can investigate the types of emojis used in the comments under different reaction profiles. We present a data set of 21,000 FB posts (57 million reactions and 8 million comments) from public media pages across four countries.
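The analysis this abstract describes can be sketched in a few lines: extract the emojis from a post's comments and set their frequencies against the post's reaction profile. The reaction names, the toy post, and the simplified emoji code-point ranges below are illustrative assumptions, not details of the actual dataset or pipeline.

```python
from collections import Counter

# Illustrative code-point ranges covering most common emojis
# (a real study would use the full Unicode emoji property data).
EMOJI_RANGES = [(0x1F300, 0x1FAFF), (0x2600, 0x27BF)]

def extract_emojis(text):
    """Return the emoji characters found in a comment."""
    return [ch for ch in text
            if any(lo <= ord(ch) <= hi for lo, hi in EMOJI_RANGES)]

def dominant_reaction(reactions):
    """Reaction with the highest count, e.g. 'Love' or 'Angry'."""
    return max(reactions, key=reactions.get)

def emoji_profile(comments):
    """Frequency of each emoji across all comments under a post."""
    counts = Counter()
    for c in comments:
        counts.update(extract_emojis(c))
    return counts

# Hypothetical post: reaction counts plus its comments.
post = {
    "reactions": {"Love": 320, "Angry": 15, "Haha": 40},
    "comments": ["So happy 😍😍", "Congrats 🎉", "😡 not again"],
}
print(dominant_reaction(post["reactions"]))            # Love
print(emoji_profile(post["comments"]).most_common(1))  # [('😍', 2)]
```

Aggregating such emoji profiles over posts grouped by dominant reaction is one simple way to compare emoji usage across reaction profiles, as the abstract proposes.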
It is assumed that there is a static set of “language regions” in the brain. Yet, language comprehension engages regions well beyond these, and patients regularly produce familiar “formulaic” expressions even when language regions are severely damaged. This suggests that the neurobiology of language is not fixed but varies with experience, such as the extent of word sequence learning. We hypothesized that perceiving overlearned sentences is supported by speech production regions and not putative language regions. Participants underwent 2 sessions of behavioral testing and functional magnetic resonance imaging (fMRI). During the intervening 15 days, they repeated 2 sentences 30 times each, twice a day. In both fMRI sessions, they “passively” listened to those sentences and to novel sentences, and produced sentences. Behaviorally, evidence for overlearning included a 2.1-s decrease in reaction times to predict the final word in overlearned sentences. This corresponded to the recruitment of sensorimotor regions involved in sentence production, inactivation of temporal and inferior frontal regions involved in novel sentence listening, and a 45% change in global network organization. Thus, there was a profound whole-brain reorganization following sentence overlearning, out of “language” and into sensorimotor regions. The latter are generally preserved in aphasia and Alzheimer’s disease, perhaps explaining residual abilities with formulaic expressions in both.
There is a widespread assumption that there is a static set of ‘language regions’ in the brain. Yet, people still regularly produce familiar ‘formulaic’ expressions when those regions are severely damaged. This suggests that the neurobiology of language varies with the extent of word sequence learning and might not be fixed. We test the hypothesis that, after overlearning, perceiving sentences is mostly supported by sensorimotor regions involved in speech production and not by ‘language regions’. Twelve participants underwent two sessions of behavioural testing and functional magnetic resonance imaging (fMRI), separated by 15 days. During this period, they repeated two sentences 30 times each, twice a day. In both fMRI sessions, participants ‘passively’ listened to those two sentences and novel sentences. Lastly, they spoke novel sentences. Behavioural results confirm that participants overlearned the sentences. Correspondingly, there was increased activity or recruitment of sensorimotor regions involved in sentence production and reduced activity or inactivity for overlearned sentences in regions involved in listening to novel sentences. The global network organization of the brain changed by ∼45%, mostly through lost connectivity. Thus, there was a profound reorganization of the neurobiology of speech perception after overlearning, towards sensorimotor regions not considered in most contemporary models and away from the ‘language regions’ posited by those models. These same sensorimotor regions are generally preserved in aphasia and Alzheimer’s disease, perhaps explaining residual abilities with formulaic language. These and other results warrant reconsidering static neurobiological models of language.
We compare the processing of relative clauses in comprehension (self-paced reading) and production (planned production). We manipulated the locality of two syntactic dependencies: filler-gap (subject vs. object gap) and subject-verb (center-embedded vs. right-branched). The non-local filler-gap dependency resulted in a longer embedded predicate duration across domains, consistent with memory-based accounts. For the non-local subject-verb dependency, we observe longer reading times at the main verb, but in production a greater likelihood and duration of a pause preceding the main verb. We argue that this result stems from the cost of computing the restriction, which manifests as a prosodic break. In the context of the subject-verb dependency manipulation, we also revisit the source of interpretation breakdown in multiple center-embedding. Generally, our findings imply that memory-based accounts are adequate for filler-gap, but not subject-verb, dependencies, and that production studies can aid in understanding complexity effects.