The rejection of reliability and validity in qualitative inquiry in the 1980s resulted in an interesting shift in the locus of “ensuring rigor” from the investigator's actions during the course of the research to the reader or consumer of qualitative inquiry. The emphasis on strategies implemented during the research process has been replaced by strategies for evaluating trustworthiness and utility that are applied once a study is completed. In this article, we argue that reliability and validity remain appropriate concepts for attaining rigor in qualitative research. We argue that qualitative researchers should reclaim responsibility for reliability and validity by implementing verification strategies that are integral and self-correcting during the conduct of inquiry itself. This approach ensures the attainment of rigor using strategies inherent within each qualitative design, and it moves the responsibility for incorporating and maintaining reliability and validity from external reviewers' judgements to the investigators themselves. Finally, we make a plea for a return to the terminology for ensuring rigor that is used by mainstream science.
Criteria for determining the trustworthiness of qualitative research were introduced by Guba and Lincoln in the 1980s, when they replaced the terminology for achieving rigor (reliability, validity, and generalizability) with dependability, credibility, and transferability. Strategies for achieving trustworthiness were also introduced. This landmark contribution to qualitative research remains in use today, with only minor modifications in format. Despite the significance of this contribution over the past four decades, the strategies recommended to achieve trustworthiness have not been critically examined. Recommendations for where, why, and how to use these strategies have not been developed, and how well they achieve their intended goal has not been examined. We do not know, for example, what impact these strategies have on the completed research. In this article, I critique these strategies. I recommend that qualitative researchers return to the terminology of the social sciences, using rigor, reliability, validity, and generalizability. I then make recommendations for the appropriate use of the strategies recommended to achieve rigor: prolonged engagement; persistent observation; thick, rich description; inter-rater reliability; negative case analysis; peer review or debriefing; clarifying researcher bias; member checking; external audits; and triangulation.
Determining Sample Size

Last month I attended a conference presentation given by a senior qualitative researcher. His project used a longitudinal design with multiple interviews conducted over a period of months. In response to a question regarding sample size, he explained that he had obtained the number of participants necessary for his study by looking at a table that Morse had published (see Morse, 1994, p. 225). He had used this number in his proposal to estimate the number of participants without considering the number of interviews. Because his design used many more interviews than the studies in Morse's table, he was clearly going to drown in data in a very short time. Evidently, it is time to clarify the issues in sample size once and for all. By clarifying the assumptions underlying sample size recommendations, I will not feel quite so responsible when someone takes my work at face value.

Estimating the number of participants required to reach saturation depends on a number of factors, including the quality of the data, the scope of the study, the nature of the topic, the amount of useful information obtained from each participant, the number of interviews per participant, the use of shadowed data, and the qualitative method and study design used. Once all of these factors are considered, you may not be much further ahead in predicting the exact number, but you will be able to defend the estimated range presented in your proposal. Because the actual number of participants is still an unknown when writing the proposal, it is wise to overestimate the sample size rather than underestimate it, so that funds are available to collect all the necessary data should data collection not proceed smoothly.