2001
DOI: 10.1177/0146167201274010

“That’s Completely Obvious... and Important”: Lay Judgments of Social Psychological Findings

Abstract: Many social psychologists believe that if research results are obvious, they are unimportant and uninteresting. The current study evaluated lay perceptions of social psychological research findings. Results from three studies reveal differences between lay evaluations of research and scientific evaluations. In Study 1, students with no prior exposure to social psychology judge the most obvious research findings to be the most important. In Study 2, students can predict findings, and the most predictable findings…

Cited by 18 publications (13 citation statements) | References 31 publications

“…In these studies, participants are shown either true findings or their foils (i.e., false findings). When presented with research findings from the areas of personality (Barnett, 1986), developmental psychology (Barnett, Knust, McMillan, Kaufman, & Sinisi, 1988), and social psychology (Richard, Bond, & Stokes-Zoota, 2001), accuracy in distinguishing true findings from foils ranged from 66%-75% (chance is 50%). Using a similar methodology, Wong (1995) found that when presented with both a true finding and a foil about educational research, participants were equally likely to select either version as the actual finding.…”
mentioning
confidence: 99%
“…Explanations for confirmed beliefs may result in positive evaluations, whereas explanations for disconfirmed beliefs may result in negative evaluations, as predicted by confirmation bias. Alternatively, if simply reading about a research result can make a finding seem obvious in hindsight, then participants' evaluations about study quality and the obviousness of results should be similar regardless of prior expectations about the findings (e.g., Richard et al., 2001; Wong, 1995). In addition, the presence of an explanation for the findings may affect evaluations even when a belief is challenged.…”
mentioning
confidence: 99%
“…Sample size was determined (to ensure sufficient power) using an a priori power analysis (G*Power; Faul, Erdfelder, Lang, & Buchner, 2007), which assumed a medium effect size (characteristic of most social psychological findings). Specifically, with an assumed effect size of d = 0.4 (Richard, Bond, & Stokes-Zoota, 2001), our mixed design could achieve 80% power with as few as 72 participants, assuming a modest (r = 0.3) correlation between our repeated dependent measures. We decided to collect data until the end of the term, assuming we collected at least 72 participants.…”
Section: Participants
mentioning
confidence: 99%
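The sample-size reasoning in the statement above can be reproduced approximately outside G*Power. The sketch below is a minimal Python approximation, not the cited authors' actual G*Power setup: it treats the repeated comparison as a paired contrast, adjusts the assumed effect size d = 0.4 for an assumed correlation of r = 0.3 between the repeated measurements, and solves for the sample size at 80% power; the paired-contrast simplification is an illustrative assumption.

```python
# Approximate a priori power analysis for a repeated measure,
# using a paired-contrast simplification of the mixed design
# (an illustrative assumption, not the cited G*Power procedure).
from math import sqrt

from statsmodels.stats.power import TTestPower

d = 0.4       # assumed population effect size (Cohen's d) between conditions
r = 0.3       # assumed correlation between the repeated measurements
alpha = 0.05  # two-sided significance level
power = 0.80  # target power

# For a paired contrast, SD(difference) = SD * sqrt(2 * (1 - r)),
# so the standardized difference score is d_z = d / sqrt(2 * (1 - r)).
d_z = d / sqrt(2 * (1 - r))

# Solve for the required number of participants at the target power.
n = TTestPower().solve_power(effect_size=d_z, alpha=alpha, power=power,
                             alternative="two-sided")
print(f"adjusted effect size d_z = {d_z:.3f}, required n ~ {n:.1f}")
```

Under these assumptions the solver returns roughly 70 participants, in the same neighborhood as the 72 participants mentioned in the citation statement.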
“…A power analysis using G*Power (Faul, Erdfelder, Lang, & Buchner, 2007) was run to determine sample size. Using a mixed, pre-post design and assuming the typical moderate effect size in social psychology (i.e., r = .21, d = .43; Richard, Bond, & Stokes-Zoota, 2001) and high correlations between repeated measures typical of event-related potential (ERP) designs (r = .60-.80; Segalowitz, Santesso, Murphy, Homan, Chantziantoniou, & Khan, 2010), we determined that a total sample size of 48 participants yields a power value of 0.90 for …”
Section: Participants
mentioning
confidence: 99%
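The same paired-contrast simplification (again an illustrative assumption, not the cited G*Power model) can be used to see how the assumed repeated-measures correlation drives the sample size in this second statement: here the solver runs at 90% power across the r = .60-.80 range quoted for ERP designs.

```python
# Sketch of how the assumed repeated-measures correlation affects required n,
# under the same paired-contrast simplification (not the cited analysis).
from math import sqrt

from statsmodels.stats.power import TTestPower

d = 0.43  # typical social-psychology effect size quoted in the statement

for r in (0.60, 0.70, 0.80):
    d_z = d / sqrt(2 * (1 - r))
    n = TTestPower().solve_power(effect_size=d_z, alpha=0.05, power=0.90,
                                 alternative="two-sided")
    print(f"r = {r:.2f}: d_z = {d_z:.2f}, required n ~ {n:.0f}")
```

At the lower end of the assumed correlation range this approximation lands close to the 48 participants reported, and the required sample shrinks as the correlation rises.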