A basic issue in social influence is how best to change one's judgment in response to learning the opinions of others. This article examines the strategies that people use to revise their quantitative estimates on the basis of the estimates of another person. The authors note that people tend to use 2 basic strategies when revising estimates: choosing between the 2 estimates and averaging them. The authors developed the probability, accuracy, redundancy (PAR) model to examine the relative effectiveness of these two strategies across judgment environments. A surprising result was that averaging was the more effective strategy across a wide range of commonly encountered environments. The authors observed that despite this finding, people tend to favor the choosing strategy. Most participants in these studies would have achieved greater accuracy had they always averaged. The identification of intuitive strategies, along with a formal analysis of when they are accurate, provides a basis for examining how effectively people use the judgments of others. Although a portfolio of strategies that includes averaging and choosing can be highly effective, the authors argue that people are not generally well adapted to the environment in terms of strategy selection.
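The relative performance of the two strategies can be illustrated with a small Monte Carlo sketch. This is not the authors' PAR model itself; the judge accuracies (standard deviations 10 and 15) and the 70% chance of correctly identifying the more accurate judge are illustrative assumptions standing in for the model's probability and accuracy parameters:

```python
import random

random.seed(42)

TRUTH = 100.0
N = 100_000
P_PICK_BETTER = 0.7  # assumed probability that the chooser picks the more accurate judge

err_avg = 0.0
err_choose = 0.0
for _ in range(N):
    # Two unbiased judges with different error dispersions (illustrative values)
    a = random.gauss(TRUTH, 10)   # more accurate judge
    b = random.gauss(TRUTH, 15)   # less accurate judge
    err_avg += abs((a + b) / 2 - TRUTH)          # averaging strategy
    picked = a if random.random() < P_PICK_BETTER else b
    err_choose += abs(picked - TRUTH)            # choosing strategy

print(f"mean absolute error, averaging: {err_avg / N:.2f}")
print(f"mean absolute error, choosing:  {err_choose / N:.2f}")
```

Under these assumptions averaging yields a lower mean absolute error than choosing, even though the chooser usually identifies the better judge; the environment parameters can be varied to explore where choosing catches up.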
People are inaccurate judges of how their abilities compare to others'. Kruger and Dunning (1999) argue that most inaccuracy is attributable to unskilled performers' lack of the metacognitive skill to evaluate their performance: they overestimate their standing, whereas skilled performers predict theirs accurately. Consequently, the majority of people believe they are above average. However, not all tasks show this bias. In a series of ten tasks across three studies, we show that moderately difficult tasks produce little overall bias and little difference in accuracy between the best and worst performers, and that more difficult tasks produce a negative bias, making the worst performers appear more accurate in their judgments. This pattern suggests that judges at all skill levels are subject to similar degrees of inaccuracy and bias. Although differences in metacognitive ability may play a role in the accuracy of interpersonal comparisons, our results indicate that, for the most part, the skilled and the unskilled are equally unaware of how their performances compare to those of others.

Skilled or Unskilled, but Still Unaware of It: How Perceptions of Difficulty Drive Miscalibration in Relative Comparisons

Research on overconfidence has found that subjective and objective measures of performance are poorly correlated (see Alba & Hutchinson, 2000, for a comprehensive review). While most of this research compares confidence in one's estimates with one's actual performance, one particular vein focuses on people's accuracy in estimating their ability relative to their peers. Such judgments are important in many contexts. In many societies, success in school, jobs, entrepreneurship, sports, and many other activities is largely a function of how one's ability and performance compare to others'.
Thus, the ability to estimate one's relative standing can have a major impact on one's life choices and one's satisfaction with those choices. The most common finding in this area is a "better-than-average" effect: On average, people think that they are above average. However, this tendency is not uniform; the overestimation comes mostly from poor performers. Figure 1 summarizes results from studies by Kruger and Dunning (1999) showing this effect. Kruger and Dunning (1999) argue that this happens because people who perform poorly at a task also lack the metacognitive skill to realize that they have performed poorly. By contrast, people who are more skilled have both the ability to perform well and the ability to accurately assess the superiority of their performance. Borrowing from the title of Kruger and Dunning's paper, we refer to this as the "unskilled-unaware hypothesis."

The unskilled-unaware hypothesis has logical and intuitive appeal. As Kruger and Dunning (1999) point out, the skills it takes to write a grammatically correct sentence are the same skills it takes to recognize a grammatically correct sentence. The most incompetent individuals overstate their abilities in ma...
Averaging estimates is an effective way to improve accuracy when combining expert judgments, integrating group members' judgments, or using advice to modify personal judgments. If the estimates of two judges ever fall on different sides of the truth, which we term bracketing, averaging must outperform the average judge for convex loss functions, such as mean absolute deviation (MAD). We hypothesized that people often hold incorrect beliefs about averaging, falsely concluding that the average of two judges' estimates would be no more accurate than the average judge. The experiments confirmed that this misconception was common across a range of tasks that involved reasoning from summary data (Experiment 1), from specific instances (Experiment 2), and conceptually (Experiment 3). However, this misconception decreased as observed or assumed bracketing rate increased (all three studies) and when bracketing was made more transparent (Experiment 2). Experiment 4 showed that flawed inferential rules and poor extensional reasoning abilities contributed to the misconception. We conclude by describing how people may face few opportunities to learn the benefits of averaging and how misappreciating averaging contributes to poor intuitive strategies for combining estimates.

Keywords: averaging opinions, combining forecasts, information aggregation, advice taking, heuristics and biases
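The bracketing claim can be checked numerically for absolute error: when two estimates fall on opposite sides of the truth, the average's error is strictly smaller than the mean of the two judges' errors; when they fall on the same side, the two quantities are equal. The truth value and estimates below are made-up numbers chosen only to show both cases:

```python
def mad_gain(truth, a, b):
    """Error of the averaged estimate minus the average error of the two judges.
    Negative values mean averaging beats the average judge (convex loss: |.|)."""
    avg_error = abs((a + b) / 2 - truth)
    mean_judge_error = (abs(a - truth) + abs(b - truth)) / 2
    return avg_error - mean_judge_error

# Bracketing: estimates on opposite sides of the truth (truth = 50)
print(mad_gain(50, 40, 70))   # average is 55 (error 5) vs mean judge error 15 -> -10.0
# No bracketing: both estimates above the truth
print(mad_gain(50, 60, 70))   # average is 65 (error 15) equals mean judge error 15 -> 0.0
```

This is the convexity argument in miniature: by the triangle inequality the gain is never positive, so averaging can never do worse than the average judge under MAD, and any bracketing makes it do strictly better.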