Humans have large social networks, with hundreds of interacting individuals. How does the brain represent the complex connectivity structure of these networks? Here we used social media (Facebook) data to objectively map participants' real-life social networks. We then used representational similarity analysis (RSA) of functional magnetic resonance imaging (fMRI) activity patterns to investigate the neural coding of these social networks as participants reflected on each individual. We found coding of social network distances in the default-mode network (medial prefrontal, medial parietal, and lateral parietal cortices). When using partial correlation RSA to control for other factors that can be correlated with social distance (personal affiliation, personality traits, and visual appearance, as subjectively rated by the participants), we found that social network distance information was uniquely coded in the retrosplenial complex, a region involved in spatial processing. In contrast, information on individuals' personal affiliation to the participants and personality traits was found in the medial parietal and prefrontal cortices, respectively. These findings demonstrate a cortical division between representations of non-self-referenced (allocentric) social network structure, self-referenced (egocentric) social distance, and trait-based social knowledge.
To successfully navigate our social world, we keep track of other individuals' relations to ourselves and to each other. But how does the brain encode this information? To answer this question, we mined participants' social media (Facebook™) profiles to objectively characterize the relations between individuals in their real-life social networks. Under fMRI, participants answered questions about each of these individuals. Using representational similarity analysis, we identified social network structure coding in the default-mode network (medial prefrontal, medial parietal, and lateral parietal cortices). When regressing out subjective factors (ratings of personal affiliation, appearance, and personality), social network structure information was uniquely found in the retrosplenial complex, a region implicated in spatial processing. In contrast, information on individuals' personality traits and affiliation to the subjects was found in the medial prefrontal and parietal cortices, respectively. These findings demonstrate a cortical division between representation of structural, trait-based, and self-referenced social knowledge.
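The partial-correlation RSA described above can be sketched in a few lines: a neural representational dissimilarity matrix (RDM) is correlated with a model RDM of social network distances while controlling for a nuisance RDM (e.g. rated affiliation). The following is a minimal illustrative sketch with randomly generated data, not the authors' actual pipeline; all array sizes and the `partial_spearman` helper are made up for the example.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import rankdata

# Hypothetical data: activity patterns for 10 individuals, plus two model
# RDMs (social network distance and a nuisance factor such as affiliation).
rng = np.random.default_rng(0)
n = 10
patterns = rng.normal(size=(n, 50))                      # fMRI patterns (illustrative)
social_dist = squareform(rng.random(n * (n - 1) // 2))   # model RDM
nuisance = squareform(rng.random(n * (n - 1) // 2))      # e.g. rated affiliation

# Neural RDM: 1 - Pearson correlation between activity patterns.
neural_rdm = 1 - np.corrcoef(patterns)

def upper(m):
    """Vectorize the upper triangle of an RDM (off-diagonal entries)."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def partial_spearman(x, y, z):
    """Spearman correlation of x and y, controlling for z (rank-residual method)."""
    xr, yr, zr = (rankdata(v) for v in (x, y, z))

    def resid(a, b):
        # Residuals of a linear fit of a on b.
        beta = np.polyfit(b, a, 1)
        return a - np.polyval(beta, b)

    rx, ry = resid(xr, zr), resid(yr, zr)
    return np.corrcoef(rx, ry)[0, 1]

# Correlation between neural and social-distance RDMs, controlling for nuisance.
r = partial_spearman(upper(neural_rdm), upper(social_dist), upper(nuisance))
```

In practice this statistic would be computed per searchlight or region of interest and tested against a permutation null distribution rather than reported raw.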
Machine-learning-generated content, such as image artworks, textual poems, and music, has become prominent in recent years. These tools attract much attention from the media, artists, researchers, and investors. Because these tools are data-driven, they are inherently different from traditional creative tools, which raises the question: who may own the content that is generated by these tools? In this paper we aim to address this question. We start by providing background on this problem, raising several candidates that may own the content and arguments for each of them. We then propose a possible algorithmic solution in the regime of vision-based models. Finally, we discuss the broader implications of this problem.
The opaque nature and unexplained behavior of transformer-based language models (LMs) have spurred wide interest in interpreting their predictions. However, current interpretation methods mostly focus on probing models from the outside, executing behavioral tests, and analyzing the salience of input features, while the internal prediction construction process is largely not understood. In this work, we introduce LM-Debugger, an interactive debugger tool for transformer-based LMs, which provides a fine-grained interpretation of the model's internal prediction process, as well as a powerful framework for intervening in LM behavior. For its backbone, LM-Debugger relies on a recent method that interprets the inner token representations and their updates by the feed-forward layers in the vocabulary space. We demonstrate the utility of LM-Debugger for single-prediction debugging by inspecting the internal disambiguation process done by GPT2. Moreover, we show how easily LM-Debugger allows users to shift model behavior in a direction of their choice, by identifying a few vectors in the network and inducing effective interventions to the prediction process. We release LM-Debugger as an open-source tool and a demo over GPT2 models.
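The core idea of interpreting token representations "in vocabulary space" can be illustrated with a toy example: project a hidden state through the unembedding matrix and read off the tokens it promotes, before and after a feed-forward update. This is a hedged sketch with made-up dimensions and random weights, not LM-Debugger's implementation; in a real model, `E` would be GPT2's unembedding matrix and `vocab` its tokenizer vocabulary.

```python
import numpy as np

# Toy setup: a tiny "model" with random unembedding weights.
rng = np.random.default_rng(1)
d_model, vocab_size = 16, 100
E = rng.normal(size=(vocab_size, d_model))    # unembedding matrix (vocab x hidden)
vocab = [f"tok{i}" for i in range(vocab_size)]

hidden = rng.normal(size=d_model)             # residual stream before a sublayer
ffn_update = rng.normal(size=d_model)         # update added by an FFN sublayer

def top_tokens(vec, k=5):
    """Project a hidden vector to vocabulary logits and return the top-k tokens."""
    logits = E @ vec
    return [vocab[i] for i in np.argsort(logits)[::-1][:k]]

# Comparing the promoted tokens before and after the FFN update reveals
# which tokens that sublayer pushes the prediction toward.
before = top_tokens(hidden)
after = top_tokens(hidden + ffn_update)
```

An intervention in this framing amounts to scaling or suppressing particular FFN update vectors so that the tokens they promote gain or lose weight in the final prediction.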