BACKGROUND
The presence of widespread misinformation in Web resources and the limited quality control provided by search engines can have serious implications for individuals seeking health advice.
OBJECTIVE
We aimed to investigate a multi-dimensional information quality assessment model based on deep learning to enhance the reliability of online healthcare information search results.
METHODS
In this retrospective study, we simulated online health information search scenarios with a topic set of 35 different health-related inquiries and a corpus containing one billion Web documents from the April 2019 snapshot of Common Crawl. Using state-of-the-art pre-trained language models, we inferred the usefulness, supportiveness, and credibility quality dimensions of the retrieved documents for a given search query.
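As a rough illustration of this scoring step (not the authors' exact implementation), the sketch below shows how a query-document pair could be scored on each quality dimension with fine-tuned sequence-classification models. The checkpoint names, the binary positive class, and the function interface are assumptions made for illustration; the abstract does not name the specific models used.

```python
# Sketch: per-dimension quality scoring of a (query, document) pair with
# pre-trained language models. Checkpoint names are hypothetical placeholders;
# a binary relevant/non-relevant label set is assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

DIMENSION_CHECKPOINTS = {            # hypothetical fine-tuned checkpoints
    "usefulness": "org/usefulness-scorer",
    "supportiveness": "org/supportiveness-scorer",
    "credibility": "org/credibility-scorer",
}

# Load each tokenizer/model pair once, up front.
MODELS = {
    dim: (AutoTokenizer.from_pretrained(ckpt),
          AutoModelForSequenceClassification.from_pretrained(ckpt))
    for dim, ckpt in DIMENSION_CHECKPOINTS.items()
}

def score_document(query: str, document: str) -> dict:
    """Return one score per quality dimension for a query-document pair."""
    scores = {}
    for dim, (tokenizer, model) in MODELS.items():
        inputs = tokenizer(query, document, truncation=True,
                           max_length=512, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        # Probability of the positive class serves as the dimension score.
        scores[dim] = torch.softmax(logits, dim=-1)[0, 1].item()
    return scores
```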
RESULTS
The usefulness model provided the largest distinction between help and harm compatibility, with a difference of 0.053. The supportiveness model achieved the best harm compatibility (0.024), while the combination of the usefulness, supportiveness, and credibility models achieved the best help and harm compatibility on helpful topics.
CONCLUSIONS
Our results suggest that integrating automatic ranking models created for specific information quality dimensions can increase the effectiveness of health-related information retrieval for decision-making.
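One concrete way to read "integrating" the dimension-specific ranking models is a simple score fusion at ranking time, sketched below. The equally weighted linear combination is an assumption; the abstract does not state how the three scores were combined.

```python
# Sketch: fuse per-dimension quality scores into one ranking score.
# Equal weights are an assumption, not the authors' stated method.
def fuse_scores(dimension_scores, weights=None):
    """Weighted average of per-dimension scores (defaults to equal weights)."""
    weights = weights or {dim: 1.0 for dim in dimension_scores}
    total = sum(weights.values())
    return sum(weights[d] * s for d, s in dimension_scores.items()) / total

def rerank(scored_docs):
    """Sort (document, dimension_scores) pairs by fused score, best first."""
    return sorted(scored_docs, key=lambda pair: fuse_scores(pair[1]),
                  reverse=True)
```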
This paper presents the results of the Data Science for Digital Health (DS4DH) group in the MEDIQA-Chat Tasks at ACL-ClinicalNLP 2023. Our approach combines a classical machine learning method, the Support Vector Machine, for classifying medical dialogues with one-shot prompting using GPT-3.5. We use dialogues and summaries from the same category as prompts to generate summaries for novel dialogues. Our results exceed the average benchmark score, offering a robust reference point for assessing performance in this field.
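A minimal sketch of this two-stage pipeline is given below: a TF-IDF + SVM classifier assigns a dialogue to a category, and a same-category dialogue/summary pair is then used as a one-shot prompt for GPT-3.5. The feature representation, exemplar selection, and prompt wording are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: classify a medical dialogue with an SVM, then summarize it with
# GPT-3.5 using a same-category (dialogue, summary) exemplar as a one-shot
# prompt. Training data and exemplars are elided; names are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from openai import OpenAI

# Stage 1: train an SVM to assign dialogues to section categories.
train_dialogues = ["...", "..."]       # training dialogues (elided)
train_labels = ["HISTORY", "EXAM"]     # their categories (elided)
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(train_dialogues, train_labels)

def summarize(new_dialogue: str, exemplars: dict) -> str:
    """Classify the dialogue, then prompt GPT-3.5 with a same-category example.

    `exemplars` maps each category to one (dialogue, summary) pair.
    """
    category = classifier.predict([new_dialogue])[0]
    example_dialogue, example_summary = exemplars[category]
    prompt = (
        f"Dialogue:\n{example_dialogue}\nSummary:\n{example_summary}\n\n"
        f"Dialogue:\n{new_dialogue}\nSummary:\n"
    )
    client = OpenAI()  # requires OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```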