Background: Traditional adverse event (AE) reporting systems have been slow to adapt to online AE reporting from patients, relying instead on gatekeepers, such as clinicians and drug safety groups, to verify each potential event. In the meantime, increasing numbers of patients have turned to social media to share their experiences with drugs, medical devices, and vaccines.

Objective: The aim of the study was to evaluate the level of concordance between Twitter posts mentioning AE-like reactions and spontaneous reports received by a regulatory agency.

Methods: We collected public English-language Twitter posts mentioning 23 medical products from 1 November 2012 through 31 May 2013. Data were filtered using a semi-automated process to identify posts with resemblance to AEs (Proto-AEs). A dictionary was developed to translate Internet vernacular to a standardized regulatory ontology for analysis (MedDRA®). The aggregated frequency of identified product-event pairs was then compared with data from the public FDA Adverse Event Reporting System (FAERS) by System Organ Class (SOC).

Results: Of the 6.9 million Twitter posts collected, 4,401 Proto-AEs were identified out of 60,000 examined. Automated, dictionary-based symptom classification had 86 % recall and 72 % precision (figures as corrected by the published erratum). Similar overall distribution profiles were observed, with a Spearman rank correlation rho of 0.75 (p < 0.0001) between Proto-AEs reported in Twitter and FAERS by SOC.

Conclusion: Patients reporting AEs on Twitter showed a range of sophistication when describing their experience. Despite the public availability of these data, their appropriate role in pharmacovigilance has not been established. Additional work is needed to improve data acquisition and automation.
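The rank-based comparison described in the Results can be sketched as follows. This is a minimal stdlib implementation of Spearman's rho, not the authors' pipeline, and the per-SOC counts are hypothetical placeholders, not the study's data.

```python
# Illustrative sketch: Spearman rank correlation between Proto-AE counts
# from Twitter and FAERS report counts, aggregated by System Organ Class.
# The counts below are hypothetical, not the study's data.

def ranks(values):
    """Assign 1-based ranks, averaging ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-SOC counts (Twitter Proto-AEs vs. FAERS reports)
twitter = [820, 610, 455, 300, 210, 150, 90, 45]
faers = [9400, 8100, 3900, 4200, 1500, 2100, 700, 300]
print(round(spearman_rho(twitter, faers), 2))  # → 0.95
```

Because Spearman's rho correlates ranks rather than raw counts, it tolerates the large scale difference between Twitter volumes and FAERS report totals, which is presumably why it suits this comparison.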
Background: Prescription opioid diversion and abuse are major public health issues in the United States and internationally. Street prices of diverted prescription opioids can provide an indicator of drug availability, demand, and abuse potential, but these data can be difficult to collect. Crowdsourcing is a rapid and cost-effective way to gather information about sales transactions. We sought to determine whether crowdsourcing can provide accurate measurements of the street price of diverted prescription opioid medications.

Objective: To assess the feasibility of crowdsourcing black market drug price data by cross-validation with law enforcement officer reports.

Methods: Using a crowdsourcing research website (StreetRx), we solicited data about the price that site visitors paid for diverted prescription opioid analgesics during the first half of 2012. These results were compared with a survey of law enforcement officers in the Researched Abuse, Diversion, and Addiction-Related Surveillance (RADARS) System, and with actual transaction prices on a "dark Internet" marketplace (Silk Road). Geometric means and 95% confidence intervals were calculated to compare prices per milligram of drug in US dollars. In a secondary analysis, we compared prices per milligram of morphine equivalent using standard equianalgesic dosing conversions.

Results: A total of 954 price reports were obtained from crowdsourcing, 737 from law enforcement, and 147 from the online marketplace. Prices from the 3 data sources were strongly correlated, with a Spearman rho of 0.93 (P<.001) between crowdsourced and law enforcement prices, and 0.98 (P<.001) between crowdsourced and online marketplace prices. On StreetRx, the mean prices per milligram were US$3.29 for hydromorphone, US$2.13 for buprenorphine, US$1.57 for oxymorphone, US$0.97 for oxycodone, US$0.96 for methadone, US$0.81 for hydrocodone, US$0.52 for morphine, and US$0.05 for tramadol. The only significant difference between data sources was for morphine, with a law enforcement (Drug Diversion) price of US$0.67/mg (95% CI 0.59-0.75) and a Silk Road price of US$0.42/mg (95% CI 0.37-0.48). Street prices generally followed clinical equianalgesic potency.

Conclusions: Crowdsourced data provide a valid estimate of the street price of diverted prescription opioids. The (ostensibly free) black market was able to accurately predict the relative pharmacologic potency of opioid molecules.
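The secondary analysis, geometric means with 95% CIs on prices converted to morphine milligram equivalents, can be sketched roughly as follows. The price quotes and the equianalgesic conversion factors are illustrative assumptions, not the study's data or its exact conversion table.

```python
import math

# Sketch: convert price-per-milligram quotes to price per milligram of
# morphine equivalent, then summarize with a geometric mean and 95% CI.
# Prices and conversion factors below are illustrative, not study data.

# Approximate oral morphine-equivalent conversion factors (assumed values)
MME_FACTOR = {"oxycodone": 1.5, "hydrocodone": 1.0, "hydromorphone": 4.0,
              "morphine": 1.0, "tramadol": 0.1}

def price_per_mme(price_per_mg, drug):
    # Dividing by the potency factor expresses the price per mg of
    # morphine equivalent, making drugs of different potency comparable.
    return price_per_mg / MME_FACTOR[drug]

def geometric_mean_ci(values, z=1.96):
    """Geometric mean with a normal-approximation 95% CI on the log scale."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mean = sum(logs) / n
    sd = (sum((x - mean) ** 2 for x in logs) / (n - 1)) ** 0.5
    se = sd / n ** 0.5
    return math.exp(mean), math.exp(mean - z * se), math.exp(mean + z * se)

# Hypothetical crowdsourced oxycodone quotes (US$/mg)
quotes = [0.80, 1.00, 0.97, 1.10, 0.90, 1.25]
gm, lo, hi = geometric_mean_ci([price_per_mme(q, "oxycodone") for q in quotes])
print(f"US${gm:.2f}/MME mg (95% CI {lo:.2f}-{hi:.2f})")
```

Geometric rather than arithmetic means are the natural summary here because street prices are strictly positive and right-skewed, so averaging on the log scale damps the influence of a few very high quotes.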
Background: Preparing and submitting a voluntary adverse event (AE) report to the US Food and Drug Administration (FDA) for a medical device typically takes 40 min. User-friendly Web and mobile reporting apps may increase efficiency. Further, coupled with strategies for direct patient involvement, patient engagement in AE reporting may be improved. In 2012, the FDA Center for Devices and Radiological Health (CDRH) launched a free, public mobile AE reporting app, MedWatcher, for patients and clinicians. During the same year, a patient community on Facebook adopted the app to submit reports involving a hysteroscopic sterilization device, brand name Essure®.

Methods: Outreach was conducted to administrators of the patient community "Essure Problems" (approximately 18,000 members as of June 2015) to gather individual case safety reports (ICSRs). After agreeing on key reporting principles, group administrators encouraged members to report via the app. Semi-structured forms in the app mirrored the fields of the MedWatch 3500 form. ICSRs were transmitted to CDRH via an electronic gateway, and anonymized versions were posted in the app. Data collected from May 11, 2013 to December 7, 2014 were analyzed. Narrative texts were coded by trained and certified MedDRA (version 17) coders. Descriptive statistics and metrics, including vigiGrade completeness scores, were analyzed. Various incentives and motivations to report in the Facebook group were observed.

Results: The average Essure AE report took 11.4 min (±10) to complete. Submissions from 1349 women, average age 34 years, were analyzed. Serious events, including hospitalization, disability, and permanent damage after implantation, were reported by 1047 women (77.6 %). A total of 13,135 product–event pairs were reported, comprising 327 unique preferred terms, most frequently fatigue (n = 491), back pain (468), and pelvic pain (459). Important medical events (IMEs), most frequently mental impairment (142), device dislocation (108), and salpingectomy (62), were reported by 598 women (44.3 %). Other events of interest included loss of libido (n = 115); allergy to metals (109), primarily nickel; and alopecia (252). vigiGrade completeness scores were high, averaging 0.80 (±0.15). Reports received via the mobile app were considered "well documented" 55.9 % of the time, compared with an international average of 13 % for all medical products. On average, 15 times more reports were submitted per month via the app with patient community support than via traditional pharmacovigilance portals.

Conclusions: Outreach via an online patient community, coupled with an easy-to-use app, allowed rapid and detailed ICSRs to be submitted, with gains in efficiency. Two-way communication and public posting of narratives led to successful engagement within a Motivation-Incentive-Activation-Behavior framework, a conceptual model for successful crowdsourcing. Reports submitted by patients were considerably more complete than those submitted by physicians in routine spontaneous reports. Further research…
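The vigiGrade-style completeness scoring mentioned in the Results can be sketched as a multiplicative penalty scheme: a report starts at 1.0 and is penalized for each poorly documented dimension, with scores above 0.8 conventionally counted as "well documented". The specific dimensions and penalty weights below are illustrative assumptions, not the published vigiGrade parameters.

```python
# Sketch of a vigiGrade-style completeness score. A fully documented
# report scores 1.0; each missing dimension multiplies in a penalty.
# Dimensions and weights here are illustrative assumptions only.

PENALTIES = {              # penalty applied when the field is missing
    "time_to_onset": 0.5,  # assumed: heavily weighted dimension
    "indication": 0.7,
    "outcome": 0.7,
    "sex": 0.7,
    "age": 0.7,
    "dose": 0.7,
    "country": 0.9,
    "free_text": 0.5,      # assumed: narrative weighted heavily too
}

def completeness(report):
    """report: dict mapping dimension name -> bool (present and informative)."""
    score = 1.0
    for field, penalty in PENALTIES.items():
        if not report.get(field, False):
            score *= penalty
    return score

report = {field: True for field in PENALTIES}
report["time_to_onset"] = False           # only onset information missing
print(round(completeness(report), 2))     # 0.5; "well documented" needs > 0.8
```

The multiplicative form means a single heavily weighted omission (here, time to onset) can by itself push a report below the "well documented" threshold, which matches the intuition that some fields matter far more than others for causality assessment.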
Background: The nonmedical use of pharmaceutical products has become a significant public health concern. Traditionally, the evaluation of nonmedical use has focused on controlled substances with addiction risk. Currently, there is no effective means of evaluating the nonmedical use of noncontrolled antidepressants.

Objective: Social listening, in the context of public health sometimes called infodemiology or infoveillance, is the process of identifying and assessing what is being said about a company, product, brand, or individual within forms of electronic interactive media. The objectives of this study were (1) to determine whether content analysis of social listening data could be used to identify posts discussing potential misuse or nonmedical use of bupropion and two comparators, amitriptyline and venlafaxine, and (2) to describe and characterize these posts.

Methods: Social listening was performed on all publicly available posts, cumulative through July 29, 2015, from two harm-reduction Web forums, Bluelight and Opiophile, that mentioned the study drugs. The acquired data were stripped of personally identifiable information (PII). A set of generic, brand, and vernacular product names was used to identify product references in posts. Natural language processing tools were used to identify vernacular references to drug misuse-related Preferred Terms from the English-language Medical Dictionary for Regulatory Activities (MedDRA) version 18 terminology. Posts were reviewed manually by coders, who extracted relevant details.

Results: A total of 7756 references to at least one of the study antidepressants were identified in the posts gathered for this study. Of these, 668 posts (8.61%, 668/7756) referenced misuse or nonmedical use of a study drug, with bupropion accounting for 438 (65.6%, 438/668). Of the 668 posts, nonmedical use was discouraged by 40.6% (178/438), 22% (22/100), and 18.5% (24/130), and encouraged by 12.3% (54/438), 10% (10/100), and 10.8% (14/130), for bupropion, amitriptyline, and venlafaxine, respectively. The most commonly reported desired effects resembled those of stimulants for bupropion, sedatives for amitriptyline, and dissociatives for venlafaxine. The nasal route of administration was most frequently reported for bupropion, whereas the oral route was most frequently reported for amitriptyline and venlafaxine. Bupropion and venlafaxine were most commonly procured from health care providers, whereas amitriptyline was most commonly obtained or stolen from a third party. The Fleiss kappa for interrater agreement among 20 items with 7 categorical response options, evaluated by all 11 raters, was 0.448 (95% CI 0.421-0.457).

Conclusions: Social listening, conducted in collaboration with harm-reduction Web forums, offers a valuable new data source for monitoring the nonmedical use of antidepressants. Additional work on the capabilities of social listening will help further delineate the benefits and limitations of this rapidly evolving data source.
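The interrater-agreement statistic cited in the Results, Fleiss' kappa, generalizes Cohen's kappa to more than two raters. A minimal implementation is sketched below; the rating matrix is a small hypothetical example (rows are items, columns are categories, cells count how many raters chose that category), not the study's 20-item, 11-rater data.

```python
# Sketch of Fleiss' kappa: chance-corrected agreement for a fixed number
# of raters classifying items into categorical responses. The matrix
# below is hypothetical, not the study's rating data.

def fleiss_kappa(matrix):
    n_items = len(matrix)
    n_raters = sum(matrix[0])  # raters per item (assumed constant)
    total = n_items * n_raters
    k = len(matrix[0])
    # Overall proportion of ratings falling in each category
    p_j = [sum(row[j] for row in matrix) / total for j in range(k)]
    # Per-item observed agreement among rater pairs
    P_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in matrix]
    P_bar = sum(P_i) / n_items           # mean observed agreement
    P_e = sum(p * p for p in p_j)        # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# 4 items rated by 5 raters into 3 categories (hypothetical)
ratings = [
    [5, 0, 0],
    [4, 1, 0],
    [0, 5, 0],
    [1, 1, 3],
]
print(round(fleiss_kappa(ratings), 3))  # → 0.545
```

A kappa of 0.448, as reported in the abstract, is usually read as moderate agreement, plausible for 11 raters choosing among 7 response options on subjective forum content.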
Erratum: Abstract, paragraph 4, lines 3-4. The sentence that previously read "Automated, dictionary-based symptom classification had 72 % recall and 86 % precision" should read "Automated, dictionary-based symptom classification had 86 % recall and 72 % precision." The online version of the original article can be found under: