1978
DOI: 10.1177/0193841x7800200305
Decision Makers' Judgments

Abstract: Evaluators often assume that outcome studies assessing agency effectiveness should provide the most relevant data for decision makers who must form judgments about treatment in order to make policy, program, and clinical decisions. Yet evaluators have found that decision makers often fail to use results of evaluation studies. To shed light on the utilization problem, the research reported here was undertaken to learn about the criteria, information sources, and beliefs decision makers in the environment of nin…

Cited by 28 publications (3 citation statements)
References 10 publications
“…Reliance on evaluations can best be understood within the broader context of administrative information utilization (Bigelow, 1975; Cox, 1977; Hawkins, Roffman, & Osborne, 1978). Administrators rely on different kinds of information when making a decision.…”
“…Informal sources represent those using ad hoc (and perhaps, even haphazard) information collection methods, such as chance personal observations and conversations. Hawkins, Roffman, and Osborne (1978) concluded that reliance on informal sources of information such as personal contacts is common among decisionmakers. Mintzberg (1975) proposed that many problems in managerial work arise because managers prefer or rely on informal sources of information.…”
“…Underutilization persists despite methodological rigor and valid findings showing differential program effectiveness. A number of reasons for underutilization have been suggested, including weaknesses in methodology; the quality, pertinence, and timeliness of the information; the political context of evaluation research; organizational constraints; and the conflicting cultures of scientists and practitioners (Hawkins, et al., 1978).…”