This study examined how arguments contained in persuasive messages are represented in and retrieved from memory. We proposed that there exist generic schemata that contain typical arguments supporting positions on familiar issues and that guide the representation and retrieval of message content in a manner similar to that hypothesized by Graesser and Nakamura's (1982) schema-copy-plus-tag model of prose memory. Our subjects read a political candidate's arguments for his position on several familiar social issues. The arguments varied in their perceived typicality for messages generally supporting each position. After either a 10-min or 2-day delay, subjects were given either a recall or recognition test for the message content. The results strongly supported the schema-copy-plus-tag model. Over time, more typical than atypical arguments were correctly recalled, and subjects' recall protocols showed increasing clustering by typicality. However, recall of typical arguments was accompanied by more intrusion errors (false recalls) than recall of atypical arguments. Furthermore, recognition discrimination was better for atypical arguments at both retention intervals, though this difference was smaller after two days. In general, fewer atypical than typical arguments were falsely recognized as having been stated in the message. The results are discussed in terms of their implications for studying the relation between attitudes and memory for message content.
Bayesian penalized regression techniques, such as the Bayesian lasso and the Bayesian horseshoe estimator, have recently received considerable attention in the statistics literature. However, software implementing state-of-the-art Bayesian penalized regression, outside of general-purpose Markov chain Monte Carlo platforms such as Stan, is relatively rare. This paper introduces bayesreg, a new toolbox for fitting Bayesian penalized regression models with continuous shrinkage prior densities. The toolbox features Bayesian linear regression with Gaussian or heavy-tailed error models and Bayesian logistic regression with ridge, lasso, horseshoe and horseshoe+ estimators. The toolbox is free, open-source and available for use with the MATLAB and R numerical platforms.
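To illustrate the kind of model the toolbox fits, the following is a minimal Python sketch of the Bayesian lasso via Gibbs sampling (Park and Casella's scale-mixture-of-normals formulation), not the bayesreg toolbox's own API or implementation. The function name, its parameters, and the choice of a fixed shrinkage parameter `lam` are assumptions for illustration; bayesreg itself offers richer error models and the horseshoe-family priors described in the abstract.

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_samples=2000, burn=500, seed=0):
    """Minimal Gibbs sampler for the Bayesian lasso (illustrative sketch).

    The Laplace prior on each coefficient is written as a scale mixture of
    normals with exponential mixing; `lam` is a fixed shrinkage parameter
    (a full treatment would place a prior on it). Returns posterior draws
    of the regression coefficients beta.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    sigma2 = 1.0                 # error variance, updated each iteration
    tau2 = np.ones(p)            # local mixing scales for the Laplace prior
    XtX, Xty = X.T @ X, X.T @ y
    draws = np.empty((n_samples, p))
    for it in range(burn + n_samples):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}), A = X'X + diag(1/tau2)
        A_inv = np.linalg.inv(XtX + np.diag(1.0 / tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma2 | rest ~ Inverse-Gamma (sampled as scale / Gamma draw)
        resid = y - X @ beta
        shape = (n - 1 + p) / 2.0
        scale = (resid @ resid + beta @ (beta / tau2)) / 2.0
        sigma2 = scale / rng.gamma(shape)
        # 1/tau2_j | rest ~ Inverse-Gaussian (NumPy's Wald distribution)
        mu = np.sqrt(lam**2 * sigma2 / beta**2)
        tau2 = 1.0 / rng.wald(mu, lam**2)
        if it >= burn:
            draws[it - burn] = beta
    return draws
```

Averaging the returned draws gives shrunken coefficient estimates; for sparse true coefficients the posterior means of the zero coefficients are pulled toward zero, which is the behavior the continuous shrinkage priors in the abstract are designed to produce.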