In the aftermath of Donald Trump's 2016 Electoral College victory, journalists focused heavily on the white working class (WWC) and the relationships among economic anxiety, racial attitudes, immigration attitudes, and support for Trump. One hypothesized but untested proposition for Trump's success is that his unorthodox candidacy, particularly his rhetoric on economic marginalization and immigration, shifted WWC voters who did not vote Republican in 2012 into his coalition. Using a large national survey, we examine (1) whether racial and immigration attitudes or economic dislocation and marginality were the main correlates of vote switching, and (2) whether this phenomenon was isolated to the white working class. We find that a non-trivial number of white voters switched their votes in the 2016 election to Trump or Clinton, that this vote switching was more strongly associated with racial and immigration attitudes than with economic factors, and that the phenomenon occurred among both working-class and non-working-class whites, though many more working-class whites switched than non-working-class whites. Our findings suggest that racial and immigration attitudes may be continuing to sort white voters into new partisan camps and further polarize the parties.
This article explores the effect of explicitly racial and inflammatory speech by political elites on mass citizens in a societal context where equality norms are widespread and generally heeded yet a subset of citizens nonetheless possesses deeply ingrained racial prejudices. The authors argue that such speech should have an ‘emboldening effect’ among the prejudiced, particularly where it is not clearly and strongly condemned by other elite political actors. To test this argument, the study focuses on the case of the Trump campaign for president in the United States, and utilizes a survey experiment embedded within an online panel study. The results demonstrate that in the absence of prejudiced elite speech, prejudiced citizens constrain the expression of their prejudice. However, in the presence of prejudiced elite speech – particularly when it is tacitly condoned by other elites – the study finds that the prejudiced are emboldened to both express and act upon their prejudices.
Social scientists have long hand-labeled texts to create datasets useful for studying topics from congressional policymaking to media reporting, and many have begun to incorporate machine learning into their toolkits. RTextTools was designed to make machine learning accessible by providing a start-to-finish product in fewer than ten steps. After installing RTextTools, the first step is to generate a document-term matrix. Second, a container object is created, which holds all the objects needed for further analysis. Third, users train models using up to nine algorithms. Fourth, the data are classified. Fifth, the classification is summarized. Sixth, functions are available for performance evaluation. Seventh, ensemble agreement is assessed. Eighth, users can cross-validate their data. Finally, users write their data to a spreadsheet, allowing for further manual coding if required.
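The core steps of this workflow (document-term matrix, training, classification, evaluation) can be sketched in Python with scikit-learn. This is a hypothetical analogue, not the RTextTools package itself, and the toy documents, labels, and model choice are illustrative assumptions.

```python
# Minimal sketch of a document-term-matrix -> train -> classify -> evaluate
# pipeline, assuming scikit-learn. Data and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy labeled corpus with two policy topics.
docs = [
    "tax relief for small business owners",
    "corporate tax reform and revenue",
    "funding for rural hospitals",
    "medicare coverage for prescription drugs",
]
labels = ["economy", "economy", "health", "health"]

# Step 1: generate a document-term matrix.
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)

# Train a classifier on the labeled documents, then classify new ones.
model = LogisticRegression().fit(dtm, labels)
new_docs = ["tax credits for hospitals", "small business tax cuts"]
predictions = model.predict(vectorizer.transform(new_docs))

# Summarize performance (here, accuracy on the training documents).
train_acc = accuracy_score(labels, model.predict(dtm))
print(list(predictions), train_acc)
```

The same shape generalizes: swap in other algorithms at the training step, or hold out labeled data for cross-validation rather than scoring on the training set.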
This article assesses the claim that sanctuary cities—defined as cities that expressly forbid city officials or police departments from inquiring into an individual’s immigration status—are associated with post hoc increases in crime. We employ a causal inference matching strategy to compare similarly situated cities where key variables are the same across the cities except the sanctuary status of the city. We find no statistically discernible difference in violent crime, rape, or property crime rates across the cities. Our findings provide evidence that sanctuary policies have no effect on crime rates, despite narratives to the contrary. The potential benefits of sanctuary cities, such as better incorporation of the undocumented community and cooperation with police, thus have little cost for the cities in question in terms of crime.
Text is becoming a central source of data for social science research. With advances in digitization and open records practices, the central challenge has in large part shifted away from availability to usability. Automated text classification methodologies are becoming increasingly important within political science because they hold the promise of substantially reducing the costs of converting text to data for a variety of tasks. In this paper, we consider a number of questions of interest to prospective users of supervised learning methods, which are appropriate to classification tasks where known categories are applied. For the right task, supervised learning methods can dramatically lower the costs associated with labeling large volumes of textual data while maintaining high reliability and accuracy. Information science researchers devote considerable attention to comparing the performance of supervised learning algorithms and different feature representations, but the questions posed are often less directly relevant to the practical concerns of social science researchers. The first question prospective social science users are likely to ask is: how well do such methods work? The second is likely to be: how much do they cost in terms of human labeling effort? Relatedly, how much do marginal improvements in performance cost? We address these questions in the context of a particular dataset, the Congressional Bills Project, which includes more than 400,000 labeled bill titles (19 policy topics). This corpus also provides opportunities to experiment with varying sample sizes and sampling methodologies. We are ultimately able to locate an accuracy/efficiency sweet spot of sorts for this dataset by leveraging results generated by an ensemble of supervised learning algorithms.
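The ensemble idea described here can be illustrated with a short Python sketch: train several supervised learners and accept an automatic label only where they agree, deferring disagreements to human coders. This trades coverage for accuracy. It is not the authors' code; the data, model choices, and agreement rule are illustrative assumptions.

```python
# Sketch of ensemble agreement for text classification, assuming scikit-learn.
# Labels are accepted only where all three algorithms agree; the rest (None)
# would be routed back to human labelers. Toy data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

train_docs = [
    "appropriations for defense spending",
    "military base realignment act",
    "clean water standards for rivers",
    "air quality and emissions limits",
]
train_labels = ["defense", "defense", "environment", "environment"]
unlabeled = ["navy shipbuilding appropriations", "emissions limits for trucks"]

vec = TfidfVectorizer()
X = vec.fit_transform(train_docs)
X_new = vec.transform(unlabeled)

# Train three different algorithms on the same labeled data.
models = [LogisticRegression(), MultinomialNB(), LinearSVC()]
preds = [m.fit(X, train_labels).predict(X_new) for m in models]

# Keep a label only when every model assigns the same category.
agreed = [
    (doc, p[0] if len(set(p)) == 1 else None)
    for doc, p in zip(unlabeled, zip(*preds))
]
print(agreed)
```

Raising the agreement threshold (e.g. unanimity across more algorithms) shrinks the automatically labeled set but typically raises its accuracy, which is the accuracy/efficiency trade-off the abstract describes.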