An organization's ability to innovate has become one of the most important capabilities in the knowledge economy. An organization's culture of innovation, in particular, predicts organizational innovativeness across multiple industries. While researchers have developed instruments to measure culture of innovation and thereby inform organizational opportunities for improvement, few of these instruments have been validated or replicated beyond their initial use. The current article, which is part one of a two‐part investigation, employs confirmatory factor analytic methods to validate the factor structure of the six models defined in the Innovation Quotient instrument developed by Rao and Weintraub (2013) and to assess the extent to which the models are reliable across organizational groups. Although each model demonstrated adequate model fit, each lacked discriminant validity, and reliability fell short across some organizational groups. Recommendations for model respecification are presented.
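The abstract reports a lack of discriminant validity for each model. The article does not state which criterion it applied, but a widely used check between two latent factors is the Fornell–Larcker criterion: each factor's average variance extracted (AVE) should exceed the squared inter-factor correlation. The sketch below is illustrative only; the function name and inputs are hypothetical, not taken from the study.

```python
def fornell_larcker_ok(loadings_a, loadings_b, r_ab):
    """Illustrative Fornell-Larcker discriminant-validity check.

    loadings_a, loadings_b: standardized item loadings for two factors.
    r_ab: estimated correlation between the two factors.
    Returns True when both factors' AVE exceed the squared correlation,
    i.e., each factor shares more variance with its own items than
    with the other factor.
    """
    ave_a = sum(l * l for l in loadings_a) / len(loadings_a)
    ave_b = sum(l * l for l in loadings_b) / len(loadings_b)
    return min(ave_a, ave_b) > r_ab ** 2
```

For example, with uniform loadings of .80 (AVE = .64), an inter-factor correlation of .90 (squared: .81) would fail the check, which is the kind of result consistent with the discriminant-validity problems the abstract describes.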
Sharing similar cognitive structures among team members is one key element enabling the collective to solve problems and work more efficiently (DeChurch & Mesmer‐Magnus, 2010). Research has shown that team cognition is positively associated with team performance in both organizational and educational settings. Team shared cognition constructs are relatively new and have been identified in the literature as team mental models, shared mental models, information sharing, transactive memory systems, cognitive congruence, and group learning. Cannon‐Bowers and Salas (2001) called for better measures of shared cognition to be developed, partially through the integration of shared cognition measures across disciplines. The purpose of this meta‐analysis is to examine these six team cognition constructs to identify which measure, if any, best predicts team performance. Results indicated that information sharing predicted performance significantly better than team mental models and group learning, and marginally better than transactive memory systems. Additionally, shared mental models and cognitive congruence showed higher associations with performance than team mental models, group learning, and transactive memory systems.
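Meta-analyses of correlational effect sizes like the one summarized above typically pool study-level correlations after a Fisher r-to-z transformation, weighting each study by its sample size. The following is a minimal fixed-effect sketch of that standard procedure; it is illustrative and not the authors' exact analysis, whose weighting scheme and model (fixed vs. random effects) the abstract does not specify.

```python
import math

def pooled_correlation(studies):
    """Fixed-effect pooled correlation via Fisher's r-to-z transform.

    studies: iterable of (r, n) pairs, one per primary study.
    Each correlation is transformed with atanh (Fisher z), weighted by
    n - 3 (the inverse of the z-scale sampling variance), averaged,
    and back-transformed with tanh.
    """
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for r, n in studies)
    return math.tanh(num / den)
```

For instance, two equally weighted studies reporting r = .00 and r = .60 pool to roughly r = .33 rather than the naive arithmetic mean of .30, because averaging happens on the z scale.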
While a majority of the literature in the field of performance measurement has focused on what organizations should measure, report, and use in order to improve overall organizational results, organizational leaders still experience difficulty in identifying performance measures that track the achievement of organizational strategy. This article describes methods for measuring organizational strategy and program implementation and proposes a performance‐based rubric that better enables organizational leaders to capture the essence of organizational strategy as implemented and to attribute performance outcomes to specific strategic actions. Implications for organizational leaders and researchers from all sectors are presented.
Although it is sometimes recommended that performance improvement (PI) professionals include experimental research designs in their repertoire of PI tools and methods, it has long been understood that experimental designs can be difficult to implement due to impediments arising from the complex nature of organizational settings. However, case study research has proven to be an effective alternative for identifying strengths and opportunities for the improvement of organizational procedures, policies, processes, or programs. Case study research helps managers and practitioners make sense of real-world problems. This article presents a summary of the steps in the design of case study research and provides examples of how these methods have been used within organizational settings. Implications for PI practitioners are provided.

PERFORMANCE IMPROVEMENT PROFESSIONALS use a variety of processes, methods, and tools to understand problems, to design interventions that address those problems, and to evaluate the effectiveness of the selected interventions to ensure a proper return on expectations. Often these information-gathering methods and tools try to emulate those long utilized by experimental researchers. However, due to the complex nature of organizational phenomena, true experimental designs often gain little traction as a viable option for PI practitioners seeking tools that are useful and cost-effective to implement (Mulder, 1999). Because the key purpose of any improvement effort is to learn from the past in order to predict or change the future, or to learn from best practices identified in one work unit in order to scale up or replicate them in another, it is important to invest in methods that enable the practitioner to capture information from the particular in order to generalize to other relevant work groups.
Case study research is an often neglected but useful research methodology that can be utilized to accomplish this. It is an effective tool for managers and practitioners to make better sense of workplace issues, thus further supporting problem solving and PI efforts, and one that all practitioners should have in their PI toolboxes.

The purpose of this article is to briefly summarize the key steps involved in the design of case study research and to present examples of case studies in which the results were used to inform the design of policies, practices, or other processes to improve performance in organizations. Although it is not the goal of this article to present exemplars in case study design, the reader may be able to identify commonalities between the issues and needs inherent in each of the cases and the reader's own organization, and to extract applications that will help translate case study findings into learning opportunities.
While previous research has illustrated the importance of measuring and interpreting results of organizational culture of innovation, few of the instruments in existence, and particularly the Innovation Quotient instrument, have been validated beyond their initial use. In part one of the present investigation, Danks, Rao, and Allen (2017) sought to validate the Innovation Quotient instrument and assess the extent to which it was reliable across relevant organizational groups. That study identified that each of the six models demonstrated adequate model fit using confirmatory factor analyses, but each of the models also presented a lack of evidence for discriminant validity and a lack of reliability across some of the organizational groups. As multicollinearity was excluded as a possible contributor to the issue, it was proposed that future research investigate alternative models, such as the specification of a global, six-factor construct of innovation culture instead of six individual, three-factor models. It was also proposed that additional research explore possible explanations for inconsistencies in score reliability estimates across groups, particularly for the values, resources, processes, and climate blocks, as multiple reliability thresholds were not met.

The purpose of part two of this investigation is to explore and validate alternative models of a culture of innovation, using Rao and Weintraub's (2013) Innovation Quotient instrument. To investigate the plausibility of the proposed integrated, six-factor model, a confirmatory factor analysis and reliability analysis will be conducted. In the absence of evidence for convergent validity, discriminant validity, or reliability, an alternative factor structure of culture of innovation will be investigated using an exploratory factor analysis (EFA). A subsequent confirmatory factor analysis (CFA) will be conducted to identify evidence for convergent and discriminant validity of the alternate model, and estimates of reliability across multiple organizational groups will be analyzed.

FIGURE 1. Building Blocks to a Culture of Innovation

Review of the Literature

A full review of the literature, including a discussion of the construct of a culture of innovation, a review of available instruments, previous efforts to validate those instruments, and a detailed summary of the Innovation Quotient instrument, is presented in part one of this study (Danks et al., 2017). This review of the literature will build on those findings and explore the tenability of a global measure of the culture of innovation, which is consistent with previously discussed instruments (i.e., Aiman-Smith, Goodrich, Roberts, & Scinta, 2005; Dobni, 2008). Aiman-Smith et al.'s (2005) ...
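The score reliability assessed across organizational groups in studies like this one is most often estimated with Cronbach's alpha, computed from item-level survey responses. A minimal sketch, assuming responses are available as equal-length per-item score lists (this is a generic illustration of the coefficient, not the authors' reported analysis):

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of survey items.

    items: list of equal-length lists, one per item, each holding the
    scores all respondents gave that item.
    Alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items and totals are per-respondent sums.
    """
    k = len(items)
    item_var_sum = sum(variance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_var_sum / variance(totals))
```

Alpha is typically computed separately for each organizational group and compared against a conventional cutoff such as .70; the article notes that "multiple reliability thresholds were not met" without specifying which cutoffs were applied.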