MMH was associated with improved medication adherence, perceived quality of life, and self-efficacy.

Trial Registration: This project was registered at http://clinicaltrials.gov/ under identifier NCT01730235.
Background: Self-report is the most common method of measuring medication adherence, but it is influenced by recall error and response bias and typically does not provide insight into the causes of poor adherence. Ecological momentary assessment (EMA) of health behaviors using mobile phones offers a promising alternative for assessing adherence and collecting related data that can be clinically useful for adherence problem solving.

Objective: To determine the feasibility of using EMA via mobile phones to assess adolescent asthma medication adherence and to identify contextual characteristics of adherence decision making.

Methods: We used a descriptive and correlational study design to explore a mobile method of symptom and adherence assessment using an interactive voice response system. Adolescents aged 12-18 years with a diagnosis of asthma and a prescribed inhaler were recruited from an academic medical center. A survey including barriers to mobile phone use, the Illness Management Survey, and the Pediatric Asthma Quality of Life Questionnaire was administered at baseline. Quantitative and qualitative assessments of asthma symptoms and adherence were conducted with daily calls to mobile phones for 1 month. The Asthma Control Test (ACT) was administered at 2 study time points: baseline and 1 month after baseline.

Results: The sample consisted of 53 adolescents who were primarily African American (34/53, 64%) and female (31/53, 58%), with incomes of US $40,000/year or lower (29/53, 55%). The majority of adolescents (37/53, 70%) reported that they carried their phones with them everywhere, but only 47% (25/53) were able to use their mobile phones at school. Adolescents responded to an average of 20.1 (SD 8.1) of the 30 daily calls received (67%). Response frequency declined during the last week of the month (b=-0.29, P<.001) and was related to EMA-reported levels of rescue inhaler adherence (r=0.33, P=.035). Using EMA, adolescents reported an average of 0.63 (SD 1.2) asthma symptoms per day and used a rescue inhaler an average of 70% of the time (SD 35%) when they experienced symptoms. About half (26/49, 53%) of the instances of nonadherence took place in the presence of friends. EMA-measured adherence to rescue inhaler use correlated appropriately with asthma control as measured by the ACT (r=-0.33, P=.034).

Conclusions: Mobile phones provided a feasible method of assessing asthma symptoms and adherence in adolescents. The EMA method was consistent with the ACT, a widely established measure of asthma control, and the results provided valuable insights into the context of adherence decision making that could be used clinically for problem solving or as feedback to adolescents in a mobile or Web-based support system.
Objective: To determine whether ChatGPT can generate useful suggestions for improving clinical decision support (CDS) logic and to assess their noninferiority compared to human-generated suggestions. Methods: We supplied summaries of CDS logic to ChatGPT, an artificial intelligence (AI) question-answering tool that uses a large language model, and asked it to generate suggestions. We asked human clinician reviewers to review the AI-generated suggestions alongside human-generated suggestions for improving the same CDS alerts, and to rate the suggestions for usefulness, acceptance, relevance, understanding, workflow, bias, inversion, and redundancy. Results: Five clinicians analyzed 36 AI-generated suggestions and 29 human-generated suggestions for 7 alerts. Of the 20 suggestions that scored highest in the survey, 9 were generated by ChatGPT. The AI-generated suggestions were found to offer unique perspectives and were evaluated as highly understandable and relevant, with moderate usefulness and low acceptance, bias, inversion, and redundancy. Conclusion: AI-generated suggestions could be an important complementary part of optimizing CDS alerts; they can identify potential improvements to alert logic and support their implementation, and may even assist experts in formulating their own suggestions for CDS improvement. ChatGPT shows great potential for using large language models and reinforcement learning from human feedback to improve CDS alert logic, and potentially other medical areas involving complex clinical logic, a key step in the development of an advanced learning health system.
Persistent early childhood aggression is a strong predictor of violence later in life. To determine how well general pediatricians counsel parents on aggression management strategies, we evaluated responses to open-ended questions and endorsements of specific aggression management strategies among 27 pediatricians. Sixteen (59%) screened regularly for aggression, and 23 (85%) counseled (rather than referred) if parents raised concerns. Pediatricians were most likely to spontaneously recommend time-outs (85%) and verbal reprimands (78%) and much less likely to recommend other strategies such as redirecting (26%, p < 0.01) and promoting empathy (22%, p < 0.001). When specifically asked, however, pediatricians did endorse other aggression management strategies. Pediatricians appear to take a limited approach to counseling parents of children with hurtful behavior. To increase health care providers' role in violence prevention, more systematic efforts are needed to increase rates of screening for early childhood aggression and to broaden the scope of how pediatricians counsel parents.