Artificial intelligence (AI) using a convolutional neural network (CNN) has demonstrated promising performance in radiological analysis. We aimed to develop and validate a CNN for the detection and diagnosis of focal liver lesions (FLLs) from ultrasonography (USG) still images. The CNN was developed with a supervised training method using 40,397 retrospectively collected images from 3,487 patients, including 20,432 FLLs (hepatocellular carcinomas (HCCs), cysts, hemangiomas, focal fatty sparing, and focal fatty infiltration). AI performance was evaluated using an internal test set of 6,191 images with 845 FLLs, then externally validated using 18,922 images with 1,195 FLLs from two additional hospitals. The internal evaluation yielded an overall detection rate, diagnostic sensitivity, and specificity of 87.0% (95%CI: 84.3–89.6), 83.9% (95%CI: 80.3–87.4), and 97.1% (95%CI: 96.5–97.7), respectively. The CNN also performed consistently well on the external validation cohorts, with a detection rate, diagnostic sensitivity, and specificity of 75.0% (95%CI: 71.7–78.3), 84.9% (95%CI: 81.6–88.2), and 97.1% (95%CI: 96.5–97.6), respectively. For the diagnosis of HCC, the CNN yielded a sensitivity, specificity, and negative predictive value (NPV) of 73.6% (95%CI: 64.3–82.8), 97.8% (95%CI: 96.7–98.9), and 96.5% (95%CI: 95.0–97.9) on the internal test set, and 81.5% (95%CI: 74.2–88.8), 94.4% (95%CI: 92.8–96.0), and 97.4% (95%CI: 96.2–98.5) on the external validation set, respectively. The CNN detected and diagnosed common FLLs in USG images with excellent specificity and NPV for HCC. Further development of an AI system for real-time detection and characterization of FLLs in USG is warranted.
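The sensitivity, specificity, and NPV figures reported above follow the standard confusion-matrix definitions. As a minimal sketch with hypothetical counts (not taken from the study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate among diseased cases
    specificity = tn / (tn + fp)  # true-negative rate among healthy cases
    npv = tn / (tn + fn)          # probability a negative result is truly negative
    return sensitivity, specificity, npv

# Hypothetical counts for illustration only
sens, spec, npv = diagnostic_metrics(tp=80, fp=10, tn=890, fn=20)
```

A high NPV, as reported for HCC here, means a negative CNN read makes malignancy unlikely, which is the property most relevant to a screening tool.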
One challenge in applying deep learning to medical imaging is the lack of labeled data. Although large amounts of clinical data are available, acquiring labeled image data is difficult, especially for bone scintigraphy (i.e., 2D bone imaging) images. Bone scintigraphy images are generally noisy, and ground-truth or gold-standard information from surgical or pathological reports may not be available. We propose a novel neural network model that can segment abnormal hotspots and classify bone cancer metastases in the chest area in a semisupervised manner. Our proposed model, called MaligNet, is an instance segmentation model that incorporates ladder networks to harness both labeled and unlabeled data. Unlike deep learning segmentation models that classify each instance independently, MaligNet utilizes global information via an additional connection from the core network. To evaluate the performance of our model, we created a dataset for bone lesion instance segmentation using labeled and unlabeled example data from 544 and 9,280 patients, respectively. Our proposed model achieved a mean precision, mean sensitivity, and mean F1-score of 0.852, 0.856, and 0.848, respectively, and outperformed the baseline mask region-based convolutional neural network (Mask R-CNN) by 3.92%. Further analysis showed that incorporating global information also helps the model classify specific instances that require information from other regions. On the metastasis classification task, our model achieved a sensitivity of 0.657 and a specificity of 0.857, demonstrating its potential for automated diagnosis using bone scintigraphy in clinical practice.
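Ladder networks, which MaligNet incorporates, combine a supervised loss on labeled examples with weighted per-layer denoising reconstruction losses on unlabeled data. A minimal sketch of that objective, assuming simple additive weighting (the abstract does not specify MaligNet's exact loss):

```python
def ladder_objective(supervised_loss, layer_recon_losses, layer_weights):
    """Ladder-network-style semisupervised objective:
    supervised loss plus weighted per-layer reconstruction costs
    computed from unlabeled data. Hypothetical sketch only."""
    unsupervised = sum(w * c for w, c in zip(layer_weights, layer_recon_losses))
    return supervised_loss + unsupervised
```

The reconstruction terms let the 9,280 unlabeled patients shape the learned representations even though only 544 patients contribute to the supervised term.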
The volatile organic compound (VOC) profile for diagnosing hepatocellular carcinoma (HCC) and monitoring its therapeutic response has not been well studied. We determined the VOC profile in the exhaled breath of 97 HCC patients and 111 controls using gas chromatography–mass spectrometry and a Support Vector Machine algorithm. The combination of acetone, 1,4-pentadiene, methylene chloride, benzene, phenol, and allyl methyl sulfide provided the highest accuracy of 79.6%, with 76.5% sensitivity and 82.7% specificity in the training set; and 55.4% accuracy, 44.0% sensitivity, and 75.0% specificity in the test set. This combination correlated with HCC stage, as demonstrated by an increasing distance from the classification boundary as the stage advanced. For early HCC detection, d-limonene provided 62.8% sensitivity, 51.8% specificity, and 54.9% accuracy. The levels of acetone, butane, and dimethyl sulfide were significantly altered after treatment. Patients with complete response had a greater decrease in acetone level than those with remaining tumor post-treatment (73.38 ± 56.76 vs. 17.11 ± 58.86 × 10⁶ AU, p = 0.006). Using a cutoff of 35.9 × 10⁶ AU, the reduction in acetone level predicted treatment response with 77.3% sensitivity, 83.3% specificity, 79.4% accuracy, and an AUC of 0.784. This study demonstrates the feasibility of exhaled VOCs as a non-invasive tool for diagnosis and for monitoring HCC progression and treatment response.
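The treatment-response rule above reduces to a single threshold on the drop in exhaled acetone. A sketch of that decision rule, assuming a reduction at or above the cutoff counts as a predicted response (the abstract does not state whether the boundary itself is inclusive):

```python
ACETONE_CUTOFF_AU = 35.9e6  # cutoff on acetone reduction, in arbitrary units (AU)

def predicts_treatment_response(acetone_before_au, acetone_after_au):
    """Predict complete response from the post-treatment drop in acetone.
    Threshold direction follows the abstract: responders showed the
    larger decrease. Inclusive boundary is an assumption."""
    reduction = acetone_before_au - acetone_after_au
    return reduction >= ACETONE_CUTOFF_AU
```

A patient whose acetone falls from 100 × 10⁶ AU to 50 × 10⁶ AU (a 50 × 10⁶ AU drop) would be classified as a predicted responder under this rule.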
Despite the wide availability of ultrasound machines for hepatocellular carcinoma surveillance, an inadequate number of expert radiologists performing ultrasounds in remote areas remains a primary barrier to surveillance. We demonstrated the feasibility of artificial intelligence (AI) to aid in the detection of focal liver lesions (FLLs) during ultrasound. An AI system for FLL detection in ultrasound videos was developed. Data in this study were prospectively collected at a university hospital. We applied a two-step training strategy for developing the AI system, using a large collection of ultrasound snapshot images and frames from full-length ultrasound videos. Detection performance of the AI system was evaluated and then compared to that of 25 physicians, including 16 non-radiologist physicians and 9 radiologists. Our dataset contained 446 videos (273 videos with 387 FLLs and 173 videos without FLLs) from 334 patients. The videos yielded 172,035 frames with FLLs and 1,427,595 frames without FLLs for training the AI system. The AI system achieved an overall detection rate of 89.8% (95%CI: 84.5–95.0), which was significantly higher than that achieved by non-radiologist physicians (29.1%, 95%CI: 21.2–37.0, p < 0.001) and radiologists (70.9%, 95%CI: 63.0–78.8, p < 0.001). The median false-positive detection rate of the AI system was 0.7% (IQR: 1.3%). The AI system operated at 30–34 frames per second, demonstrating real-time feasibility. A further study to demonstrate whether the AI system can assist operators during ultrasound examinations is warranted.
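The real-time claim follows from simple arithmetic: sustaining 30–34 frames per second leaves roughly 29–33 ms of processing budget per frame, which is the bound any per-frame inference must meet. A sketch of that calculation:

```python
def per_frame_budget_ms(fps):
    """Maximum per-frame processing time (ms) that sustains a given frame rate."""
    return 1000.0 / fps

# At the system's reported throughput, each frame must complete within:
budget_at_30 = per_frame_budget_ms(30)  # ~33.3 ms
budget_at_34 = per_frame_budget_ms(34)  # ~29.4 ms
```

Conventional ultrasound video runs at comparable frame rates, which is why throughput in this range supports live, in-exam use rather than offline review.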