Understanding the biological functions of molecules in specific human tissues or cell types is crucial for gaining insights into human physiology and disease. This requires systematically uncovering associations among multilevel elements (disease phenotypes, tissues, cell types and molecules), a task complicated by their heterogeneity and incompleteness. To address this challenge, we describe a new methodological framework, Graph Local InfoMax (GLIM), built on a human multilevel network (HMLN) that we established by introducing multiple tissues and cell types on top of molecular networks. GLIM systematically mines potential relationships between multilevel elements by embedding the features of the HMLN through contrastive learning. Our simulation results demonstrate that GLIM consistently outperforms other state-of-the-art algorithms in disease gene prediction. GLIM was also successfully used to infer cell markers and to rewire intercellular and molecular interactions in the context of specific tissues or diseases. As a typical case, GLIM uncovered for the first time the tissue-cell-molecule network underlying gastritis and gastric cancer, providing systematic insights into the mechanisms underlying the occurrence and development of gastric cancer. Overall, our methodological framework has the potential to systematically uncover complex disease mechanisms and to mine high-quality relationships among phenotypic, tissue, cellular and molecular elements.
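For readers unfamiliar with contrastive graph embedding, the sketch below illustrates the general Deep Graph Infomax-style technique that GLIM's name alludes to: a graph encoder learns node embeddings by distinguishing real nodes from corrupted ones against a graph-level summary. The encoder architecture, corruption scheme, and all sizes here are illustrative assumptions, not GLIM's actual design.

```python
# Minimal sketch of contrastive (InfoMax-style) graph embedding, assuming
# a one-layer GCN encoder and feature-shuffling corruption.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph convolution: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return torch.relu(self.lin(a_hat @ h))

class ContrastiveGraphEncoder(nn.Module):
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.gcn = GCNLayer(in_dim, emb_dim)
        # Bilinear discriminator scores (node embedding, graph summary) pairs.
        self.disc = nn.Bilinear(emb_dim, emb_dim, 1)

    def forward(self, a_hat, x):
        pos = self.gcn(a_hat, x)                  # embeddings of real nodes
        x_corrupt = x[torch.randperm(x.size(0))]  # shuffled features = negatives
        neg = self.gcn(a_hat, x_corrupt)
        summary = torch.sigmoid(pos.mean(dim=0))  # graph-level summary vector
        s = summary.expand_as(pos)
        return pos, self.disc(pos, s), self.disc(neg, s)

# Toy usage: 5 nodes with 8 features; a_hat stands in for the normalized
# adjacency matrix of the heterogeneous network.
x = torch.randn(5, 8)
a_hat = torch.eye(5)
model = ContrastiveGraphEncoder(8, 16)
emb, pos_s, neg_s = model(a_hat, x)
labels = torch.cat([torch.ones_like(pos_s), torch.zeros_like(neg_s)])
loss = nn.BCEWithLogitsLoss()(torch.cat([pos_s, neg_s]), labels)
loss.backward()  # trains embeddings to maximize local-global mutual information
```

The trained embeddings can then be compared (for example by cosine similarity) to score candidate links between elements such as diseases and genes.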
Screening patients with precancerous lesions of gastric cancer (PLGC) is important for gastric cancer prevention. The accuracy and convenience of PLGC screening could be improved by using machine learning to uncover and integrate valuable characteristics of noninvasive medical images related to PLGC. In this study, we therefore focused on tongue images and, for the first time, constructed a tongue image-based PLGC screening deep learning model (AITongue). The AITongue model uncovered potential associations between tongue image characteristics and PLGC and integrated canonical risk factors, including age, sex, and Hp infection. Five-fold cross-validation analysis on an independent cohort of 1,995 patients revealed that the AITongue model could screen for PLGC with an AUC of 0.75, 10.3% higher than that of a model including only the canonical risk factors. Of note, we investigated the value of the AITongue model in predicting PLGC risk by establishing a prospective PLGC follow-up cohort, in which it reached an AUC of 0.71. In addition, we developed a smartphone-based app screening system to make the AITongue model more convenient to apply in the natural population of high-risk areas of gastric cancer in China. Collectively, our study demonstrates the value of tongue image characteristics in PLGC screening and risk prediction.
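As a rough illustration of the architecture such a model implies, the sketch below fuses CNN features from a tongue image with a small vector of tabular risk factors (age, sex, Hp infection) into a single screening logit. The backbone choice, layer sizes, and late-fusion scheme are assumptions for illustration, not the authors' exact AITongue design.

```python
# Minimal sketch: late fusion of image features with tabular risk factors,
# assuming a ResNet18 backbone and a binary PLGC screening head.
import torch
import torch.nn as nn
from torchvision import models

class ImagePlusRiskFactorNet(nn.Module):
    def __init__(self, n_risk_factors=3):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone
        feat_dim = backbone.fc.in_features        # 512 for resnet18
        backbone.fc = nn.Identity()               # expose pooled image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(feat_dim + n_risk_factors, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                     # PLGC vs non-PLGC logit
        )

    def forward(self, image, risk_factors):
        img_feat = self.backbone(image)                    # (B, 512)
        fused = torch.cat([img_feat, risk_factors], dim=1) # late fusion
        return self.head(fused)

# Toy usage: batch of 2 RGB tongue images plus [age, sex, Hp] per patient.
model = ImagePlusRiskFactorNet()
images = torch.randn(2, 3, 224, 224)
risk = torch.tensor([[55.0, 1.0, 0.0], [62.0, 0.0, 1.0]])
prob_plgc = torch.sigmoid(model(images, risk))  # screening score per patient
```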
Compared with tongue diagnosis using dedicated tongue image analyzers, tongue diagnosis by smartphone has great advantages in convenience and cost for universal health monitoring, but its accuracy is affected by smartphone shooting conditions. It is therefore necessary to develop deep learning models for smartphone tongue diagnosis that are accurate and robust to changes in the shooting environment, and to quantify how environmental changes affect accuracy. In our study, a dataset of 9,003 images was constructed after image pre-processing and labeling. We then developed an attention-based deep learning model (Deep Tongue) for 8 subtasks of tongue diagnosis, including spotted tongue, teeth-marked tongue, and fissured tongue, achieving an average AUC of 0.90, 0.10 higher than the ResNet50 baseline. Next, through a consistency experiment comparing direct subject inspection with tongue image inspection, we analyzed the objective factors affecting the accuracy of smartphone tongue diagnosis, namely the brightness of the environment and the hue of the images. Finally, we quantified the robustness of the Deep Tongue model by simulating environmental changes. Overall, the Deep Tongue model achieved higher and more stable classification accuracy on seven tongue diagnosis tasks under the complex shooting conditions of smartphones, while the classification of tongue coating (yellow/white) was found to be sensitive to image hue and therefore unreliable without stricter shooting conditions and color correction.
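The kind of robustness simulation described above can be sketched as a sweep over synthetic brightness and hue perturbations, watching how the model's predictions drift. The perturbation ranges and the `model` interface below are illustrative assumptions, not the study's actual protocol.

```python
# Minimal sketch of a shooting-condition robustness sweep, assuming a
# model that maps a (B, 3, H, W) image batch to per-subtask logits.
import torch
import torchvision.transforms.functional as TF

def robustness_sweep(model, image, brightness_factors, hue_shifts):
    """Return per-subtask scores under each simulated shooting condition."""
    model.eval()
    results = {}
    with torch.no_grad():
        for b in brightness_factors:
            for h in hue_shifts:
                perturbed = TF.adjust_hue(TF.adjust_brightness(image, b), h)
                logits = model(perturbed.unsqueeze(0))
                results[(b, h)] = torch.sigmoid(logits).squeeze().tolist()
    return results

# Toy usage with a random image tensor and a placeholder linear model.
image = torch.rand(3, 224, 224)  # stands in for a tongue photo in [0, 1]
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 224 * 224, 8))
scores = robustness_sweep(model, image,
                          brightness_factors=[0.6, 1.0, 1.4],
                          hue_shifts=[-0.1, 0.0, 0.1])
# Large score swings across hue_shifts would flag hue-sensitive subtasks,
# e.g. yellow/white tongue-coating classification.
```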
BACKGROUND: Gastroscopy is conducive to the early diagnosis of gastric cancer, but screening the premalignant patients who actually need gastroscopy remains a key clinical issue. Current screening strategies, including serum testing-based screening, are limited by high cost or invasive sampling, making them difficult to apply to large-scale natural populations. A cost-effective, noninvasive auxiliary screening method suitable for large-scale application is therefore urgently needed.

OBJECTIVE: The aim of this study was to construct a smartphone-based noninvasive auxiliary screening system for patients with precancerous lesions of gastric cancer. Building on this system, we expect to apply the concept of mobile health (mHealth) to help screen natural populations at risk of gastric cancer and in need of gastroscopy.

METHODS: We developed the screening system by applying a naive Bayes classification algorithm to collected questionnaires and gastritis medical records, and built an affiliated app for application testing. The system was validated in three communities, and its performance was assessed by comparison with other methods.

RESULTS: We constructed the "BIANQUE" screening system. First, we collected 841 questionnaires and 75,624 medical records. Second, we selected 9 risk factors from 20 candidate factors. Third, the resulting screening system achieved an AUC of 0.78 (95% CI 0.71-0.86), comparable to blood testing-based screening methods (AUC = 0.76). Fourth, in community validation, the odds ratio (OR) between risk strata and gastric precancerous lesions was 2.85.

CONCLUSIONS: We have established an auxiliary screening system that helps predict who needs gastroscopy. The system enables noninvasive, cost-effective testing with performance comparable to current invasive screening strategies, and we therefore expect it to be readily applicable to large-scale natural populations.

CLINICALTRIAL: Chinese Clinical Trial Registry ChiCTR2100044006; http://www.chictr.org.cn/
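Since the abstract names the algorithm explicitly, a naive Bayes screener over questionnaire-style risk factors can be sketched as below. The feature encoding, synthetic data, and thresholding are invented placeholders, not the study's actual 9 selected risk factors or cohort.

```python
# Minimal sketch of a naive Bayes risk screener over binary questionnaire
# answers, assuming yes/no-coded risk factors and a binary outcome.
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Toy dataset: 1000 respondents, 9 binary risk-factor answers (yes=1/no=0).
X = rng.integers(0, 2, size=(1000, 9))
# Synthetic labels loosely tied to the first three factors, for demo only.
y = (X[:, :3].sum(axis=1) + rng.random(1000) > 2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = BernoulliNB().fit(X_train, y_train)

risk_scores = clf.predict_proba(X_test)[:, 1]  # P(needs gastroscopy | answers)
print("AUC:", roc_auc_score(y_test, risk_scores))
# In deployment, thresholds on risk_scores would stratify respondents into
# the risk strata that the community validation compares against lesions.
```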