Objectives: This study aimed to describe how health researchers identify and counteract fraudulent responses when recruiting participants online.
Design: Scoping review.
Eligibility criteria: Peer-reviewed studies published in English; studies reporting on the online recruitment of participants for health research; and studies specifically describing methodologies or strategies to detect and address fraudulent responses during the online recruitment of research participants.
Sources of evidence: Nine databases (Medline, Informit, AMED, CINAHL, Embase, Cochrane CENTRAL, IEEE Xplore, Scopus and Web of Science) were searched from inception to April 2024.
Charting methods: Two authors independently screened and selected each study and performed data extraction, following the Joanna Briggs Institute's methodological guidance for scoping reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines. A predefined framework, adapted from a participatory mapping study that identified indicators of fraudulent survey responses, guided the evaluation of fraud identification and mitigation strategies and allowed systematic assessment and comparison of the effectiveness of various antifraud strategies across the included studies.
Results: 23 studies were included, of which 18 (78%) reported encountering fraudulent responses. The proportion of participants excluded for fraudulent or suspicious responses ranged from 3% to 94%. Survey completion time was used in six studies (26%) to identify fraud, with completion times under 5 min flagged as suspicious. Twelve studies (52%) focused on non-confirming responses, identifying implausible text patterns through specific questions, consistency checks and open-ended questions. Four studies examined temporal events, such as unusual survey completion times. Seven studies (30%) reported on geographical incongruity, using IP address verification and location screening. Incentives were reported in 17 studies (73%), with higher incentives often increasing fraudulent responses. Mitigation strategies included in-built survey features such as CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart; 34%), manual verification (21%) and video checks (8%). Most studies recommended combining multiple detection methods to maintain data integrity.
Conclusion: There is insufficient evaluation of strategies to mitigate fraud in online health research, which hinders the ability to offer researchers evidence-based guidance on their effectiveness. Researchers should employ a combination of strategies to counteract fraudulent responses when recruiting online to optimise data integrity.
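Two of the indicators described above (completion times under 5 min and repeated IP addresses) lend themselves to automated screening. The following is a minimal sketch of such a check; the record fields, thresholds, and sample data are illustrative assumptions, not drawn from any of the reviewed studies.

```python
from collections import Counter

def flag_suspicious(responses, min_minutes=5):
    """Flag survey responses that complete too quickly or share an IP address."""
    ip_counts = Counter(r["ip"] for r in responses)
    flagged = []
    for r in responses:
        reasons = []
        if r["minutes"] < min_minutes:
            reasons.append("completed under %d min" % min_minutes)
        if ip_counts[r["ip"]] > 1:
            reasons.append("duplicate IP address")
        if reasons:
            flagged.append((r["id"], reasons))
    return flagged

# Hypothetical responses: one genuine, one rushed, two sharing an IP.
responses = [
    {"id": 1, "minutes": 12.0, "ip": "203.0.113.5"},
    {"id": 2, "minutes": 3.2, "ip": "203.0.113.9"},
    {"id": 3, "minutes": 9.5, "ip": "203.0.113.9"},
]
print(flag_suspicious(responses))
```

As the review notes, no single signal is decisive; a check like this is best used to queue responses for manual verification rather than to exclude them automatically.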
Background: The growth in online qualitative research and data collection offers several advantages for health services researchers and participants, including convenience and extended geographic reach. However, these online processes can also present unexpected challenges, including participant fraud or scam behaviour. This study describes an incident of participant fraud identified during online focus group discussions and interviews for a PhD health services research project on paediatric neurodevelopmental care.
Methods: We aimed to recruit carers of Australian children with neurodevelopmental disorders. Potential participants were recruited via a publicly available social media advert on Facebook offering AUD $50 compensation. Those who expressed interest via email (n = 254) were sent a pre-interview Qualtrics survey to complete. We identified imposters at an early stage via inconsistencies between their self-reported geographical location and that captured by the survey, as well as by recognising suspicious actions before, during and after focus group discussions and interviews.
Results: Interest in participation was unexpectedly high. We determined that all potential participants were likely imposters, posing as multiple individuals and using different IP addresses across Nigeria, Australia and the United States. In doing so, we were able to characterise several "red flags" for identifying imposter participants, particularly those posing as multiple individuals. These comprise a combination of factors including large volumes and strange timings of email responses, unlikely demographic characteristics, short or vague interviews, a preference for non-visual participation, fixation on monetary compensation, and inconsistencies in reported geographical location. Additionally, we propose several strategies to combat this issue, such as requiring proof of location or eligibility during recruitment and data collection, examining email and consent form patterns, and comparing demographic data with regional statistics.
Conclusions: The emergent risk of imposter participants is an important consideration for those seeking to conduct health services research using qualitative approaches in online environments. Methodological design choices intended to improve equity and access for the target population may have the unintended consequence of improving access for fraudulent actors unless appropriate risk mitigation strategies are also employed. Lessons learned from this experience are likely to be valuable for novice health services researchers involved in online focus group discussions and interviews.
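Several of the "red flags" above can be combined into a simple screening checklist. The sketch below shows one way to do this; the flag names, field names, and cutoffs (such as the 15-minute interview threshold) are assumptions for illustration, not criteria taken from the study.

```python
# Each entry maps a red-flag name to a predicate over a participant record.
# All fields and thresholds are hypothetical examples of the indicators above.
RED_FLAGS = {
    "location_mismatch": lambda p: p["reported_country"] != p["ip_country"],
    "odd_email_hour": lambda p: p["email_hour_local"] < 6 or p["email_hour_local"] > 23,
    "declined_video": lambda p: not p["camera_on"],
    "short_interview": lambda p: p["interview_minutes"] < 15,
}

def red_flag_count(participant):
    """Return the names of all red flags triggered by this participant."""
    return [name for name, check in RED_FLAGS.items() if check(participant)]

# Hypothetical participant: reports Australia but connects from Nigeria,
# emails at 3 a.m. local time, and keeps the camera off.
p = {"reported_country": "AU", "ip_country": "NG",
     "email_hour_local": 3, "camera_on": False, "interview_minutes": 40}
print(red_flag_count(p))
```

Because any individual flag can have an innocent explanation (e.g. travel, shift work, camera discomfort), a checklist like this is better used to prioritise follow-up verification than as an automatic exclusion rule.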
Background: Transgender and gender nonbinary (TNB) people experience economic and psychosocial inequities that make them particularly vulnerable to the financial and mental health harms exacerbated by the COVID-19 pandemic. Sustainable, multilevel interventions are needed to address these harms. The onset of the COVID-19 pandemic galvanized many TNB-led organizations to provide emergency financial and peer support for TNB people negatively impacted by the pandemic. However, the efficacy of these interventions has not been evaluated.
Objective: The CARES study seeks to assess the efficacy of feasible, acceptable, community-derived interventions to reduce the economic and psychological harms experienced by transgender people in the wake of COVID-19.
Methods: The study aims to (1) compare the efficacy of microgrants with or without peer mentoring in reducing psychological distress and increasing COVID-19 prevention behaviors; (2) examine mechanisms by which microgrants with or without peer mentoring may impact psychological distress; and (3) explore participants' intervention experiences and perceived efficacy. We will enroll 360 TNB adults into an embedded, mixed methods, 3-arm, 12-month randomized controlled trial. Participants will be randomized 1:1:1 to the following arms: (a) a single microgrant plus monthly financial literacy education (enhanced usual care); (b) enhanced usual care plus monthly microgrants (extended microgrants); or (c) extended microgrants combined with peer mentoring (peer mentoring). All intervention arms last 6 months, and participants complete semi-annual, web-based surveys at 0, 6 and 12 months, as well as brief process measures at 3 and 6 months. A subset of 36 participants (12 per arm) will complete longitudinal in-depth interviews at 3 and 9 months.
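The 1:1:1 allocation described in the protocol is commonly implemented with permuted-block randomization. The following is a minimal sketch of that technique; the block size of 6, the arm labels, and the fixed seed are assumptions for illustration, not details from the CARES protocol.

```python
import random

ARMS = ["enhanced_usual_care", "extended_microgrants", "peer_mentoring"]

def permuted_block_sequence(n_participants, block_size=6, seed=0):
    """Generate a 1:1:1 allocation sequence using permuted blocks.

    Each block contains every arm an equal number of times and is shuffled
    independently, keeping arm sizes balanced throughout enrollment.
    """
    assert block_size % len(ARMS) == 0, "block size must be a multiple of the arm count"
    rng = random.Random(seed)  # fixed seed so the sequence is reproducible
    sequence = []
    while len(sequence) < n_participants:
        block = ARMS * (block_size // len(ARMS))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

# With 360 participants and balanced blocks, each arm receives exactly 120.
seq = permuted_block_sequence(360)
print({arm: seq.count(arm) for arm in ARMS})
```

In practice, trial randomization lists are usually generated and concealed by a statistician or a trial management system rather than by the recruiting researchers, so that allocation cannot be predicted.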