In this work, we introduce the disclosure-outcomes management model, which offers propositions to explain how intelligence interviewees mentally represent and disclose information. The model views disclosure as a behavior that interviewees implement to maximize their self-interests. We theorize that interviewees cooperate by managing their disclosures in response to self-interest dilemmas: they compare the potential outcomes of disclosing against their self-interests and estimate the extent to which disclosure will facilitate or impede those interests. As a result, an interviewee’s self-interest dilemma elicits cooperation with respect to some information but not other information. We discuss how this model fits with and advances the paradigm of intelligence interviewing research.
To avoid concerns about manipulation, nudges should be transparent to the people affected by the intervention. Whether increasing the transparency of a nudge also leads to more favorable perceptions of the nudge is, however, not certain and may depend on the circumstances of the evaluation. Across three preregistered experiments (N = 1915), we study how increased transparency affects the perceived fairness of a default nudge in joint vs. separate and description- vs. experience-based evaluations. We find that transparency increases the perceived fairness of the nudge in a joint comparison, when the relative benefits of transparency are easy to see. However, in a real choice context, with nothing to compare against, transparency instead decreases perceived fairness. Efforts to make nudges more ethical may thus ironically make choice architects appear less ethical. Additionally, we find that the transparent default nudge still successfully affects behavior, that different default settings communicate different perceived intentions of the choice architect, and that participants consistently favor opt-in defaults over opt-out default nudges, regardless of their level of transparency.
Our aim was to examine how people communicate their true and false intentions. Based on construal level theory (Trope & Liberman, 2010), we predicted that statements of true intentions would be more concretely phrased than statements of false intentions. Because true intentions refer to more likely future events than false intentions, they should be mentally represented at a lower level of construal, which should be mirrored in more concrete language use. Transcripts of truthful and deceptive statements about intentions from six previous experimental studies (total N = 528) were analyzed using two automated verbal content analysis approaches: a folk-conceptual measure of concreteness (Brysbaert, Warriner, & Kuperman, 2014) and linguistic category model scoring (Seih, Beier, & Pennebaker, 2017). Contrary to our hypotheses, veracity did not predict statements’ concreteness scores, suggesting that automated verbal analysis of linguistic concreteness is not a viable deception-detection technique for intentions.
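A minimal sketch of the concreteness-scoring step this abstract describes: averaging word-level ratings from the Brysbaert, Warriner, & Kuperman (2014) norms over each transcript. The file name, column names ("Word", "Conc.M"), and tokenization are assumptions for illustration, not the authors' actual analysis pipeline.

```python
import csv
import re

def load_concreteness_norms(path):
    """Map each word to its mean concreteness rating (roughly 1 = abstract, 5 = concrete)."""
    norms = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            norms[row["Word"].lower()] = float(row["Conc.M"])
    return norms

def statement_concreteness(text, norms):
    """Average the concreteness ratings of the words in a statement that appear in the norms."""
    words = re.findall(r"[a-z']+", text.lower())
    rated = [norms[w] for w in words if w in norms]
    return sum(rated) / len(rated) if rated else None

# Hypothetical usage with one transcript line:
# norms = load_concreteness_norms("brysbaert_norms.csv")
# score = statement_concreteness("I am going to drive to the airport on Friday.", norms)
```

Veracity could then be regressed on these per-statement scores; the abstract reports that such scores did not differ by veracity.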