The COVID-19 pandemic has placed an enormous strain on healthcare systems worldwide, creating a need for more efficient methods of identifying the severity of COVID-19 patients so that resources can be allocated effectively. Existing X-ray processing models for COVID-19 identification are either highly complex or exhibit reduced efficiency in real-time scenarios. To overcome these issues, this paper presents a novel approach for identifying the severity of COVID-19 patients using an augmented multimodal X-ray feature representation model. The proposed model combines X-ray images, clinical data, and demographic information to create a robust representation of each patient's condition. The collected information is converted into multidomain feature sets, including frequency, Gabor, wavelet, and entropy components. A customized deep neural network is trained on this representation to predict the severity level of COVID-19 patients. To evaluate the performance of the proposed model, we used a dataset of X-ray images and clinical data from COVID-19 patients. Our results demonstrate that the proposed model outperforms existing methods for identifying COVID-19 severity levels, achieving an accuracy of 98.5% across multiple dataset samples. The model also shows promising performance in terms of precision, recall, and delay, and therefore has the potential to aid in the early identification and effective management of severe COVID-19 cases, contributing to the global effort to combat the COVID-19 pandemic in clinical use cases.
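The abstract describes the multidomain feature sets only at a high level; the following is a minimal sketch of how the frequency, Gabor, wavelet, and entropy components could be extracted from a single X-ray image, assuming NumPy, PyWavelets, and scikit-image as the underlying libraries. The function name, filter parameters, and summary statistics are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the multidomain feature extraction described in the
# abstract: frequency (FFT), Gabor, wavelet, and entropy components computed
# from one X-ray image and concatenated into a single feature vector.
# Library and parameter choices are assumptions for illustration.
import numpy as np
import pywt
from skimage.filters import gabor
from skimage.measure import shannon_entropy


def multidomain_features(xray: np.ndarray) -> np.ndarray:
    """Return a 1-D feature vector combining four feature domains."""
    feats = []

    # Frequency domain: summary statistics of the FFT magnitude spectrum.
    spectrum = np.abs(np.fft.fft2(xray))
    feats += [spectrum.mean(), spectrum.std(), spectrum.max()]

    # Gabor domain: mean response energy at several filter orientations.
    for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, imag = gabor(xray, frequency=0.2, theta=theta)
        feats.append(np.sqrt(real**2 + imag**2).mean())

    # Wavelet domain: energy of each sub-band of a 2-level decomposition.
    coeffs = pywt.wavedec2(xray, wavelet="db2", level=2)
    feats.append(np.square(coeffs[0]).mean())       # approximation band
    for detail in coeffs[1:]:                        # detail bands (H, V, D)
        feats += [np.square(band).mean() for band in detail]

    # Entropy component: global Shannon entropy of the image.
    feats.append(shannon_entropy(xray))

    return np.asarray(feats, dtype=np.float32)


# Example: extract features from a dummy 256x256 image; in the described
# pipeline these would be combined with clinical and demographic variables
# before being fed to the severity-prediction network.
if __name__ == "__main__":
    dummy = np.random.rand(256, 256)
    print(multidomain_features(dummy).shape)
```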