OBJECTIVE: Crowdsourcing research allows investigators to engage thousands of people to provide either data or data analysis. However, prior work has not documented the use of crowdsourcing in health and medical research. We sought to systematically review the literature to describe the scope of crowdsourcing in health research and to create a taxonomy characterizing past uses of this methodology for health and medical research.
DATA SOURCES: PubMed, Embase, and CINAHL through March 2013.
STUDY ELIGIBILITY CRITERIA: Primary peer-reviewed literature that used crowdsourcing for health research.
STUDY APPRAISAL AND SYNTHESIS METHODS: Two authors independently screened studies and abstracted data, including demographics of the crowd engaged and approaches to crowdsourcing.
RESULTS: Twenty-one health-related studies utilizing crowdsourcing met eligibility criteria. Four distinct types of crowdsourcing tasks were identified: problem solving, data processing, surveillance/monitoring, and surveying. These studies collectively engaged a crowd of >136,395 people, yet few reported demographics of the crowd. Only one (5%) reported age, sex, and race statistics, and seven (33%) reported at least one of these descriptors. Most reports included data on crowdsourcing logistics such as the length of crowdsourcing (n=18, 86%) and the time to complete the crowdsourcing task (n=15, 71%). All articles (n=21, 100%) reported employing some method for validating or improving the quality of data reported by the crowd.
LIMITATIONS: Gray literature was not searched, and only a sample of online survey articles was included.
CONCLUSIONS AND IMPLICATIONS OF KEY FINDINGS: Utilizing crowdsourcing can improve the quality, cost, and speed of a research project while engaging large segments of the public and creating novel science. Standardized guidelines are needed on which crowdsourcing metrics should be collected and reported, to provide clarity and comparability in methods.
INTRODUCTION
Crowdsourcing is an approach to accomplishing a task by opening up its completion to broad sections of the public. Innovation tournaments, prizes for solving an engineering problem, and paying online participants to categorize images are all examples of crowdsourcing. What ties these approaches together is that the task is outsourced with little restriction on who may participate. Despite the potential of crowdsourcing, little is known about the applications and feasibility of this approach for collecting or analyzing health and medical research data, where the stakes for data quality and validity are high.
One of the most celebrated crowdsourcing tasks was the prize established in 1714 by Britain's Parliament in the Longitude Act, offered to anyone who could solve the problem of identifying a ship's longitudinal position.1 The Audubon Society's Christmas Bird Count began in 1900 and continues to this day as a way for "citizen scientists" to provide data that can be used for studying bird population trends.2 However, today the world has 2.3 billion Internet users an...
Little is known about how real-time online rating platforms such as Yelp may complement the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey, the U.S. standard for evaluating patient experiences after hospitalization. We compared the content of Yelp narrative reviews of hospitals to the domains covered by HCAHPS. While the domains included in Yelp reviews covered the majority of HCAHPS domains, Yelp reviews covered an additional twelve domains not reflected in HCAHPS. The majority of Yelp topics most strongly correlated with positive or negative reviews are not measured or reported by HCAHPS. Yelp provides a large collection of patient and caregiver-centered experiences that can be analyzed with natural language processing methods to identify for policy makers what measures of hospital quality matter most to patients and caregivers while also providing actionable feedback for hospitals.
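The kind of natural language processing described above can be sketched in miniature. The following is a hedged, toy illustration only: the review snippets, star ratings, and keyword threshold are all hypothetical, and a real analysis would use a large corpus and formal topic modeling rather than simple keyword counting. It shows the basic idea of surfacing terms concentrated in negative reviews (e.g., billing) that a fixed survey instrument might not cover.

```python
# Toy sketch: find terms that appear only in negative hospital reviews.
# All review text and ratings below are hypothetical examples.
from collections import Counter
import re

reviews = [
    ("the nurses were kind and parking was easy", 5),
    ("billing was confusing and the wait was long", 1),
    ("great doctors but billing errors took months to fix", 2),
    ("clean rooms, kind staff, quick discharge", 5),
]

def tokens(text):
    """Lowercase word tokens; a real pipeline would also remove stop words."""
    return re.findall(r"[a-z]+", text.lower())

# Split vocabulary by review polarity (4-5 stars positive, 1-2 negative).
pos = Counter(t for text, stars in reviews if stars >= 4 for t in tokens(text))
neg = Counter(t for text, stars in reviews if stars <= 2 for t in tokens(text))

# Terms recurring in negative reviews but absent from positive ones flag
# candidate domains of dissatisfaction (threshold of 2 is arbitrary here).
neg_only = {t for t in neg if t not in pos and neg[t] >= 2}
print(sorted(neg_only))  # → ['billing']
```

Even this crude contrast recovers "billing" as a negative-review theme, illustrating how review mining can suggest quality domains beyond a standardized survey's fixed item set.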
Objective To compare the effectiveness of shared decision making with usual care in the choice between admission for observation and further cardiac testing or referral for outpatient evaluation in patients with possible acute coronary syndrome.
Design Multicenter pragmatic parallel randomized controlled trial.
Setting Six emergency departments in the United States.
Participants 898 adults (aged >17 years) with a primary complaint of chest pain who were being considered for admission to an observation unit for cardiac testing (451 allocated to the decision aid and 447 to usual care), and 361 emergency clinicians (emergency physicians, nurse practitioners, and physician assistants) caring for patients with chest pain.
Interventions Patients were randomly assigned (1:1) by an electronic, web-based system to shared decision making facilitated by a decision aid or to usual care. The primary outcome, selected by patient and caregiver advisers, was patient knowledge of their risk for acute coronary syndrome and of options for care; secondary outcomes were involvement in the decision to be admitted, the proportion of patients admitted for cardiac testing, and the 30 day rate of major adverse cardiac events.
Results Compared with the usual care arm, patients in the decision aid arm had greater knowledge of their risk for acute coronary syndrome and options for care (questions correct: decision aid 4.2 v usual care 3.6; mean difference 0.66, 95% confidence interval 0.46 to 0.86), were more involved in the decision (observing patient involvement scores: decision aid 18.3 v usual care 7.9; mean difference 10.3, 9.1 to 11.5), and less frequently decided with their clinician to be admitted for cardiac testing (decision aid 37% v usual care 52%; absolute difference 15%; P<0.001).
There were no major adverse cardiac events due to the intervention.
Conclusions Use of a decision aid in patients at low risk for acute coronary syndrome increased patient knowledge of their risk, increased engagement in the decision, and safely decreased the rate of admission to an observation unit for cardiac testing.
Trial registration ClinicalTrials.gov NCT01969240.
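The headline arithmetic in the Results can be checked with a quick sketch. This back-calculates a Wald 95% confidence interval for the 15 percentage-point difference in admission rates from the reported arm sizes and percentages; the inputs are approximate reconstructions from the abstract, and the trial's own analysis may have used different methods.

```python
# Rough check of the reported absolute difference in admission rates.
# Proportions and arm sizes are taken from the abstract; this is a
# simple Wald interval, not necessarily the trial's actual analysis.
import math

n1, p1 = 451, 0.37  # decision aid arm: n allocated, proportion admitted
n2, p2 = 447, 0.52  # usual care arm

diff = p2 - p1  # absolute risk difference (0.15, i.e., 15 points)
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # Wald SE
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference {diff:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
# → difference 15%, 95% CI 9% to 21%
```

The interval excludes zero, consistent with the reported P<0.001 for the admission-rate comparison.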