Background: The regulatory affairs (RA) division of a pharmaceutical establishment is the point of contact between regulatory authorities and pharmaceutical companies. It is delegated the crucial and strenuous task of extracting and summarizing relevant information, in the most meticulous manner, from various search systems. An artificial intelligence (AI)-based intelligent search system that can significantly reduce the manual effort in the existing processes of the RA department, while maintaining and improving the quality of the final outcomes, is therefore desirable. In this paper, we propose a "frequently asked questions" component and describe its utility in such an AI-based intelligent search system. The problem is further complicated by the lack of publicly available, relevant data sets in the RA domain with which to train the machine learning models that underpin cognitive search systems for regulatory authorities.

Objective: In this study, we aimed to use AI-based intelligent computational models to automatically recognize semantically similar question pairs in the RA domain and to evaluate the resulting Recognizing Question Entailment (RQE)-based system.

Methods: We used transfer learning techniques and experimented with transformer-based models pretrained on corpora collected from different resources: Bidirectional Encoder Representations from Transformers (BERT), Clinical BERT, BioBERT, and BlueBERT. We evaluated the performance of our models on a manually labeled data set of 150 question pairs from the pharmaceutical regulatory domain.

Results: The Clinical BERT model performed better than the other domain-specific BERT-based models at identifying question similarity in the RA domain. With transfer learning, the model learned domain-specific knowledge and reached its best performance when fine-tuned on a sufficient number of clinical-domain question pairs. The top-performing model achieved an accuracy of 90.66% on the test set.

Conclusions: This study demonstrates the feasibility of using pretrained language models to recognize question similarity in the pharmaceutical regulatory domain. Transformer-based models pretrained on clinical notes outperform models pretrained on biomedical text at recognizing semantic similarity between questions in this domain. We also discuss the challenges of using data augmentation techniques to address the lack of relevant data in this domain. Our experiments indicate that increasing the number of training samples through back translation and entity replacement did not improve model performance, which may be attributed to the intricate and specialized nature of texts in the regulatory domain. Our work provides a foundation for further studies that apply state-of-the-art language models to regulatory documents in the pharmaceutical industry.
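To make the fine-tuning setup described in Methods concrete, the sketch below shows how a pretrained clinical BERT checkpoint might be fine-tuned for question-pair (RQE) classification with the Hugging Face transformers library. The checkpoint name, CSV file names, column names, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: fine-tuning a BERT-style encoder for question-pair
# (RQE) classification. Model checkpoint, data files, and hyperparameters
# are assumptions for illustration only.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"  # assumed Clinical BERT checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Assumed CSV layout: columns "question1", "question2", "label" (1 = similar, 0 = not)
dataset = load_dataset("csv", data_files={"train": "ra_pairs_train.csv",
                                          "test": "ra_pairs_test.csv"})

def encode(batch):
    # Encode each pair as a single [CLS] q1 [SEP] q2 [SEP] input sequence
    return tokenizer(batch["question1"], batch["question2"],
                     truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(encode, batched=True)

args = TrainingArguments(output_dir="rqe-model", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"], eval_dataset=dataset["test"])
trainer.train()
print(trainer.evaluate())  # reports accuracy/loss on the held-out pairs
```

The same script can be pointed at other pretrained checkpoints (for example, BioBERT or BlueBERT) by changing only MODEL_NAME, which is how the model comparison described above could be reproduced under these assumptions.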
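The Conclusions note that back-translation augmentation did not improve performance. For reference, a minimal back-translation sketch is shown below, assuming MarianMT English-to-German and German-to-English models as the pivot; the pivot language, checkpoints, and the sample question are illustrative assumptions rather than the paper's exact augmentation pipeline.

```python
# Hypothetical sketch of back-translation augmentation (English -> German -> English)
# using MarianMT models; checkpoints and the example question are assumptions.
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]

def back_translate(questions):
    # Translating out to a pivot language and back yields paraphrased variants
    german = translate(questions, "Helsinki-NLP/opus-mt-en-de")
    return translate(german, "Helsinki-NLP/opus-mt-de-en")

# Hypothetical regulatory-style question used only to illustrate the call
augmented = back_translate(["What stability data are required for a variation application?"])
print(augmented)
```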