Purpose: The aim of this study was to propose and evaluate a novel mixed three-dimensional (3D) V-Net and two-dimensional (2D) U-Net architecture (VUMix-Net) for fully automatic and accurate delineation of the gross tumor volume (GTV) in esophageal cancer (EC).

Methods: We collected the computed tomography (CT) scans of 215 EC patients. A 3D V-Net, a 2D U-Net, and VUMix-Net were developed and applied in parallel to delineate GTVs. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95HD) were used as quantitative metrics to evaluate the performance of the three models on ECs from different segments. The CT data of 20 patients were randomly selected; their manual contours served as the ground truth (GT) masks, and the corresponding delineation results were generated by artificial intelligence (AI). Score differences between the two groups (GT versus AI) and the consistency of evaluation were compared.

Results: Across all patients, there was a significant difference in the 2D DSCs from U-Net, V-Net, and VUMix-Net (p=0.01). In addition, VUMix-Net achieved better 3D-DSC and 95HD values. There was a significant difference among the 3D-DSC (mean ± STD) and 95HD values for upper-, middle-, and lower-segment EC (p<0.001), with middle-segment EC yielding the best values. In middle-segment EC, VUMix-Net achieved the highest 2D-DSC values (p<0.001) and the lowest 95HD values (p=0.044).

Conclusion: The new model (VUMix-Net) showed certain advantages in delineating the GTVs of EC. It can generate GTVs of EC that meet clinical requirements and match the quality of human-generated contours. The system performed best on middle-segment ECs.
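The two evaluation metrics used above, DSC and 95HD, have standard definitions. A minimal sketch of both in Python (NumPy only; the function names and the brute-force pairwise-distance approach are illustrative choices, not the study's actual implementation):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred_pts, gt_pts):
    """95th-percentile Hausdorff distance between two (N, 3) point sets.

    Brute-force pairwise distances via broadcasting; fine for small
    surface point sets, not for full volumetric masks.
    """
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each predicted point to its nearest GT point
    d_ba = d.min(axis=0)  # each GT point to its nearest predicted point
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

A DSC of 1.0 means perfect overlap; 95HD (in millimetres, given CT voxel spacing) is less sensitive to single outlier points than the plain Hausdorff distance, which is why it is the common choice for contour evaluation.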
Traditional Chinese medicine (TCM) prescriptions have made great contributions to the treatment of disease and to health preservation. To alleviate the shortage of TCM resources and improve the professionalism of automatically generated prescriptions, this paper uses deep learning to explore the connection between symptoms and herbs and realizes the automatic generation of TCM prescriptions. In particular, it considers the value of consulting similar existing prescriptions as herbal candidates during the TCM prescribing process, and it incorporates the potential guidance information of corresponding prescriptions when the model extracts symptom representations. To provide a reference for inexperienced young TCM doctors when they prescribe, this paper proposes a dual-branch guidance strategy combined with a candidate attention model (DGSCAM) that automatically generates TCM prescriptions from symptom text. The data used in this paper take the form of "symptoms-prescription" pairs. The method proceeds as follows. First, DGSCAM extracts key prescription-guided symptom information through a dual-branch network. Then, herbal candidates in the form of prescriptions that can treat the symptoms are proposed based on the compatibility knowledge of TCM prescriptions. To our knowledge, this is the first attempt to use prescriptions as herbal candidates in the prescription generation process. We conduct extensive experiments on a mixed public and clinical dataset, achieving 37.39% precision, 25.04% recall, and a 29.99% F1 score, with an average doctor score of 7.7 out of 10. The experimental results show that our proposed model is valid and can generate more specialized TCM prescriptions than the baseline models. DGSCAM has broad application scenarios and advances research on intelligent TCM prescribing.
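The precision, recall, and F1 figures reported above are the standard set-overlap metrics for comparing a generated herb list against the ground-truth prescription. A minimal sketch (the function name and herb strings are illustrative, not from the paper):

```python
def precision_recall_f1(pred_herbs, gt_herbs):
    """Set-based precision/recall/F1 between a predicted and a
    ground-truth herb list for one prescription."""
    pred, gt = set(pred_herbs), set(gt_herbs)
    tp = len(pred & gt)  # herbs the model got right
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gt) if gt else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1
```

Corpus-level scores like those in the abstract are typically the average of these per-prescription values over the test set.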