Acute myocardial infarction (AMI) is one of the most serious and dangerous cardiovascular diseases. In recent years, the number of AMI patients worldwide has increased significantly, and people under the age of 45 have become a high-risk group for sudden death from AMI. AMI develops rapidly and often shows no obvious symptoms before onset. In addition, post-onset clinical testing is complex and invasive and may cause postoperative complications. It is therefore desirable to develop a noninvasive and convenient auxiliary diagnostic method. In traditional Chinese medicine (TCM), diagnosing disease from body-surface features is an established auxiliary strategy. In particular, observing whether the palmar thenar is hypertrophic and whether the metacarpophalangeal joints are swollen can help detect AMI. Combining this with deep learning, we propose a deep model based on traditional palm images (MTIALM) that can help TCM practitioners predict myocardial infarction. By building a shared network, the model learns information common to all tasks. In addition, task-specific attention branch networks simultaneously detect the symptoms at different parts of the palm, and a proposed information interaction module (IIM) further integrates information between task branches to ensure that the model learns as many features as possible. Experimental results show that the accuracy of our model in detecting metacarpophalangeal joint and palmar thenar symptoms is 83.16% and 84.15%, respectively, a significant improvement over traditional classification methods.
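The multi-task layout the abstract describes (shared network, task-specific attention branches, and an information interaction module) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`shared_backbone`, `attention_branch`, `interaction`, `predict`) and the toy feature arithmetic are assumptions standing in for the real convolutional components.

```python
# Hypothetical sketch of the MTIALM-style multi-task structure. All names and
# the toy arithmetic are illustrative assumptions, not the paper's code.

def shared_backbone(image):
    """Stand-in for the shared network: collapse the 'image' to a feature vector."""
    return [sum(row) / len(row) for row in image]

def attention_branch(features, weights):
    """Stand-in for a task-specific attention branch: reweight shared features."""
    return [f * w for f, w in zip(features, weights)]

def interaction(feat_a, feat_b, alpha=0.5):
    """Stand-in for the information interaction module (IIM): blend the two
    branches' features so each task also sees the other task's cues."""
    mixed_a = [(1 - alpha) * a + alpha * b for a, b in zip(feat_a, feat_b)]
    mixed_b = [(1 - alpha) * b + alpha * a for a, b in zip(feat_a, feat_b)]
    return mixed_a, mixed_b

def predict(image):
    shared = shared_backbone(image)
    joint_feat = attention_branch(shared, [0.8, 0.2])   # joint-swelling branch
    thenar_feat = attention_branch(shared, [0.3, 0.7])  # thenar-hypertrophy branch
    joint_feat, thenar_feat = interaction(joint_feat, thenar_feat)
    # Each head thresholds its pooled feature into a binary symptom prediction.
    return [sum(joint_feat) > 1.0, sum(thenar_feat) > 1.0]
```

The key design point is that the branches do not classify in isolation: the IIM mixing step lets the joint-swelling branch reuse evidence gathered by the thenar branch and vice versa, which is how the paper describes feature sharing across tasks.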
Purpose
Medical imaging data of lung cancer at different stages contain a large amount of temporal information related to its evolution (emergence, growth, or regression). We explore the temporal evolution of lung images to improve lung cancer survival prediction by jointly using longitudinal CT images and clinical data.
Methods
In this paper, we propose an innovative multi-branch spatiotemporal residual network (MS-ResNet) for disease-specific survival (DSS) prediction that integrates longitudinal computed tomography (CT) images acquired at different times with clinical data. Specifically, we first extract deep features from the multi-period CT images with an improved residual network. Then, a feature selection algorithm selects the most relevant feature subset from the clinical data. Finally, we integrate the deep features and the selected clinical features, exploiting the complementarity between the two types of data to generate the final prediction.
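The selection-then-fusion step can be sketched in a few lines. This is a hedged illustration under assumptions: the paper does not specify its feature selection algorithm, so a simple correlation-based ranking stands in here, and the function names (`select_clinical`, `fuse`) are hypothetical.

```python
# Illustrative sketch of the fusion pipeline: rank clinical columns by absolute
# Pearson correlation with the survival label, keep the top k, and concatenate
# them with the CT deep features. The selector is an assumption, not the
# authors' algorithm.

def pearson(xs, ys):
    """Plain Pearson correlation; returns 0.0 for constant columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_clinical(clinical_rows, labels, k):
    """Rank clinical columns by |correlation| with the label; keep the top k indices."""
    n_cols = len(clinical_rows[0])
    cols = [[row[j] for row in clinical_rows] for j in range(n_cols)]
    ranked = sorted(range(n_cols), key=lambda j: -abs(pearson(cols[j], labels)))
    return sorted(ranked[:k])

def fuse(deep_features, clinical_row, keep_idx):
    """Concatenate CT deep features with the selected clinical features."""
    return deep_features + [clinical_row[j] for j in keep_idx]
```

For example, with four patients, a constant clinical column and an uninformative one are dropped, and only the column that tracks the label survives selection before fusion with the CT features.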
Results
The experimental results demonstrate that our MS-ResNet is superior to other methods, achieving a promising 86.78% accuracy in classifying patients as short-, medium-, or long-term survivors.
Conclusion
In computer-aided prognostic analysis of cancer, temporal features of the disease course, together with the integration of patient clinical data and CT data, can effectively improve prediction accuracy.