While social capital is emerging as a theory rich in its potential for understanding the relationships between societal norms and values and community outcomes, its measurement remains an unresolved issue. This article contributes to this measurement issue by presenting a reliable self-report instrument for measuring social capital in societal environments. The instrument is grounded in the theoretical and measurement literature of social capital, and proposes an evolving conceptual framework of social capital's dimensions, determinants, and outcomes. The instrument was empirically validated using data collected in the African Republics of Ghana and Uganda. The article presents results of exploratory and confirmatory factor analyses that substantiate a number of robust dimensions of social capital, prominent at the household and aggregate levels, and across the two country data sets. Both recommended and suggested survey questions are documented for use in subsequent research relevant to measuring social capital. Regression analyses supporting the validity of the measures are included, as are reliability measures.
Background: There have been increasing calls for integrating computational thinking and computing into school science, mathematics, and engineering classrooms. The learning goals of the curriculum in this study included learning about both computational thinking and climate science. Including computer science in science classrooms also entails a shift in focus toward the design and creation of artifacts and their attendant practices. One such design practice, widespread in the design and arts fields, is critique. This paper explores the role of critique in two urban, heterogeneous 8th grade science classrooms in which students engaged in creating computer games on the topic of climate systems and climate change. It explores and compares how practices of critique resulted from curricular decisions to (i) scaffold intentional critique sessions for student game designers and (ii) allow for spontaneous feedback as students interacted with each other and their games during the process of game creation. Results: Although we designed formal opportunities for critique, the participatory dimension of the project meant that students were free to critique each other's games at any time during the building process and did so voluntarily. Data indicate that students focused much more on the game play dimension of the design than on the science, particularly in those critique sessions that were student-initiated. Despite the de-emphasis on science in spontaneous critiques, students still focused on several dimensions of computational thinking, considering user experience, troubleshooting, modeling, and elegance of solutions. Conclusions: Students making games about science topics should have opportunities for both formal and spontaneous critiques. Spontaneous critiques allow students to act as authorities of knowledge and to determine what is acceptable and what is not. However, formal, teacher-designed critiques may be necessary for students to focus on science as part of the critique.
Furthermore, one of the benefits of critiquing others was that students were able to see what others had done, how they had set up their games, the content they included, and how they had programmed certain features. Lastly, critiques can help facilitate iteration as students work to improve their games.
Purpose: The purpose of this paper is to examine critically the accuracy of expert judgment, drawing on empirical evidence and theory from multiple disciplines. It suggests that counsel offered with confidence by experts might, under certain circumstances, be without merit, and presents approaches to assessing the accuracy of such counsel. Design/methodology/approach: The paper synthesizes research findings on expert judgment drawn from multiple fields, including psychology, criminal justice, political science, and decision analysis. It examines internal and external factors affecting the veracity of what experts may judge to be matters of common sense, using a semiotic structure. Findings: In multiple domains, including management, expert accuracy is, in general, no better than chance. Increased experience, however, is often accompanied by an unjustified increase in self-confidence. Practical implications: While the dynamic nature of decision making in organizations renders the development of a codified, reliable knowledge base potentially unachievable, there is value in recognizing these limitations and employing tactics to explore more thoroughly both problem and solution spaces. Originality/value: The paper's originality lies in its integration of recent, multiple-disciplinary research as a basis for persuading decision makers of the perils of accepting expert advice without skepticism.