Human factors research commonly employs perception-based techniques to investigate team performance and its dependence on cognitive processes. Such studies frequently rely on either observer-based or self-assessment techniques to collect data. In this study, we examined behavioral observer ratings and self-assessment ratings for measuring team performance in virtual teams, with team performance regarded as a combination of task outcome and team cognition. Juxtaposing self-assessments and observer ratings from a quasi-experiment comparing team performance rating techniques reveals that they produce broadly similar results, with both singling out teamwork effectiveness ratings as the strongest contributor to overall team performance. However, the comparisons show remarkably low correlations on individual questionnaire items. The most striking difference is that the team members' self-assessments of workload are lower than the corresponding observer ratings. In particular, the self-assessments do not correlate at all with overall team performance, whereas the observers' workload ratings are more consistent with contemporary research suggesting a strong correlation between workload and team performance; this indicates that observer-based techniques are more reliable than self-assessments for assessing workload. For the other ratings, the results show that the two techniques are roughly equivalent, suggesting that the choice of method can be deferred to other considerations such as obtrusiveness, accessibility, and resource availability.