2011
DOI: 10.1016/j.riob.2011.10.003
Best practices: How to evaluate psychological science for use by organizations

Abstract: We discuss how organizations can evaluate psychological science for its potential usefulness to their own purposes. Common sense is often the default but inadequate alternative, and benchmarking supplies only collective hunches instead of validated principles. External validity is an empirical process of identifying moderator variables, not a simple yes-no judgment about whether lab results replicate in the field. Hence, convincing criteria must specify what constitutes high-quality empirical evidence for org…

Cited by 9 publications (5 citation statements)
References 156 publications
“…We admit that both approaches to science are essential: the dimension of the ecological validity of method, and the dimension of generalisability of the conclusions. Nevertheless, we endorse the view that when it is difficult to gain both, it is more important to ensure generalisability of the principles rather than the exact operationalisations (Banaji and Crowder 1989; Fiske and Borgida 2011). Since the methodology of our study enables us to focus on well-known and general cognitive processes operational in data-based decision-making tasks in the laboratory, we argue that the results also generalise to naturalistic contexts where these same processes are at work outside laboratories (Banaji and Crowder 1989).…”
Section: Discussion (supporting)
Confidence: 59%
“…Given this abundance of social scientific effort, each of these topics features a high‐quality scientific consensus that diversity‐performance relations are mixed and produce null or very small means when aggregated across studies. Such a consensus is fully worthy of presentation to advocates and policy makers (Fiske & Borgida, ). However, social scientists have a long way to go before understanding the mediators and moderators of relations between diversity and outcomes, most especially the causal relations that are involved.…”
Section: Social Scientists as Honest Brokers (mentioning)
Confidence: 99%
“…We must convey the accumulated scientific consensus, not just today's cutting‐edge finding that may not replicate or survive vetting. We establish scientific consensus by a variety of means (Fiske & Borgida, ): meta‐analyses, narrative literature reviews, professional society consensus reports, surveys of experts, or adversarial collaboration.…”
Section: Policy Insights From Social Psychology (mentioning)
Confidence: 99%