This section includes shorter papers (e.g., 10-15 double-spaced manuscript pages or less) describing methods and techniques that can improve evaluation practice. Method notes may include reports of new evaluation tools, products, and/or services that are useful for practicing evaluators. Alternatively, they may describe new uses of existing tools. Also appropriate for this section are user-friendly guidelines for the proper use of conventional tools and methods, particularly for those that are commonly misused in practice.

Abstract: Collaboration is a prerequisite for the sustainability of interagency programs, particularly those programs initially created with the support of time-limited grant-funding sources. From the perspective of evaluators, however, assessing collaboration among grant partners is often difficult. It is also challenging to present collaboration data to stakeholders in a way that is meaningful. In this article, the authors introduce the Levels of Collaboration Scale, which was developed from existing models and instruments. The authors extend prior work on measuring collaboration by exploring the reliability of the scale and developing a format for visually displaying the results obtained from using the scale.
Subjects were asked to select from among four possible sequences the “most likely” to result from flipping a coin five times. Contrary to the results of Kahneman and Tversky (1972), the majority of subjects (72%) correctly answered that the sequences are equally likely to occur. This result suggests, as does performance on similar NAEP items, that most secondary school and college-age students view successive outcomes of a random process as independent. However, in a follow-up question, subjects were also asked to select the “least likely” result. Only half the subjects who had answered correctly responded again that the sequences were equally likely; the others selected one of the sequences as least likely. This result was replicated in a second study in which 20 subjects were interviewed as they solved the same problems. One account of these logically inconsistent responses is that subjects reason about the two questions from different perspectives. When asked to select the most likely outcome, some believe they are being asked to predict what actually will happen, and give the answer “equally likely” to indicate that all of the sequences are possible. This reasoning has been described by Konold (1989) as an “outcome approach” to uncertainty. This prediction scheme does not fit questions worded in terms of the least likely result, and thus some subjects select an incompatible answer based on “representativeness” (Kahneman & Tversky, 1972). These results suggest that the percentage of secondary school students who understand the concept of independence is much lower than the latest NAEP results would lead us to believe and, more generally, point to the difficulty of assessing conceptual understanding with multiple-choice items.
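The probabilistic fact underlying the item is simply that every specific sequence of five fair-coin flips has probability (1/2)^5 = 1/32, so no particular sequence is more or less likely than another. The sketch below is a minimal illustration of that point, not material from the study; the candidate sequences listed are hypothetical stand-ins for the item's answer options.

```python
# Illustration (not part of the original item): a Monte Carlo check that any
# specific sequence of five fair-coin flips occurs with the same probability,
# 1/2**5 = 1/32. The sequences below are hypothetical stand-ins for the
# answer options described in the abstract.
import random

SEQUENCES = ["HHHHH", "HTHTH", "THHTH", "HHHTT"]  # hypothetical answer options
TRIALS = 200_000

counts = {seq: 0 for seq in SEQUENCES}
for _ in range(TRIALS):
    flips = "".join(random.choice("HT") for _ in range(5))
    if flips in counts:
        counts[flips] += 1

for seq, count in counts.items():
    print(f"{seq}: estimated p = {count / TRIALS:.4f} (exact p = {1 / 32:.4f})")
```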
In probabilistic categorization tasks, the correct category is determined only probabilistically by the stimulus pattern. Data from such experiments have been successfully accounted for by a simple network model, but have posed difficulties for exemplar models. In the present article, we consider an exemplar model, CLEM (concept learning by exemplar memorization), which differs from previously tested exemplar models in that exemplar traces are assumed to be stored only when the subject has guessed or made a classification error. Fits of CLEM to both learning and test data were comparable to those of the network model, and better than those obtained for a version of CLEM in which encoding was independent of the subject's response. The implications of these results for the processes underlying classification decisions are discussed.
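As a rough illustration of the encoding assumption described above, the sketch below shows a similarity-based exemplar classifier that stores a new trace only when its response was a guess or a classification error. This is not the published CLEM model; the two-category setup, the similarity rule, and the parameter value are all assumptions made for the example.

```python
# Hypothetical sketch of an exemplar classifier whose memory grows only on
# guess or error trials, loosely following the encoding assumption described
# in the abstract. The similarity rule and parameters are illustrative, not
# the published CLEM model.
import random

SIMILARITY_DECAY = 0.5  # assumed per-feature mismatch penalty


def similarity(probe, trace):
    """Multiplicative similarity: each mismatching feature multiplies by the decay."""
    mismatches = sum(p != t for p, t in zip(probe, trace))
    return SIMILARITY_DECAY ** mismatches


class ErrorDrivenExemplarModel:
    def __init__(self):
        self.memory = []  # list of (stimulus, category) traces

    def classify(self, stimulus):
        """Return (predicted category, guessed flag) for a stimulus tuple."""
        evidence = {"A": 0.0, "B": 0.0}  # assumed two-category task
        for trace, category in self.memory:
            evidence[category] += similarity(stimulus, trace)
        if evidence["A"] == evidence["B"]:           # no evidence either way: guess
            return random.choice(("A", "B")), True
        return max(evidence, key=evidence.get), False

    def learn(self, stimulus, feedback_category):
        prediction, guessed = self.classify(stimulus)
        # Key assumption from the abstract: a trace is stored only after a
        # guess or a classification error, not after a confident correct response.
        if guessed or prediction != feedback_category:
            self.memory.append((stimulus, feedback_category))
        return prediction
```

In this sketch, correct and confident responses leave memory unchanged, so the stored exemplar set is biased toward early and difficult trials, which is the qualitative property the abstract contrasts with response-independent encoding.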