The emerging field of geometric deep learning extends the application of convolutional neural networks to irregular domains such as graphs, meshes and surfaces. Several recent studies have explored the potential of these techniques for analysing and segmenting the cortical surface. However, to date there has been no comprehensive comparison of these approaches with one another, nor with existing Euclidean methods. This paper benchmarks a collection of geometric and traditional deep learning models on phenotype prediction and segmentation of sphericalised neonatal cortical surface data from the publicly available Developing Human Connectome Project (dHCP). Tasks include prediction of postmenstrual age at scan, prediction of gestational age at birth, and segmentation of the cortical surface into anatomical regions defined by the M-CRIB-S atlas. Performance was assessed not only in terms of model precision but also in terms of network dependence on image registration and of model interpretation via occlusion. Networks were trained on both sphericalised and anatomical cortical meshes. Findings suggest that the utility of geometric deep learning over traditional deep learning is highly task-specific, which has implications for the design of future deep learning models on the cortical surface. The code, and instructions for data access, are available from https://github.com/Abdulah-Fawaz/Benchmarking-Surface-DL.
Studies of structural plasticity in the brain often require the detection and analysis of axonal synapses (boutons). To date, bouton detection has been largely manual or semi-automated, relying on a step that traces the axons before detecting the boutons. If tracing the axon fails, the accuracy of bouton detection is compromised. In this paper, we propose a new algorithm that detects axonal boutons in 3D two-photon images taken from the mouse cortex without requiring the axon to be traced. To find the most appropriate techniques for this task, we compared several well-known algorithms for interest point detection and feature descriptor generation. The final algorithm has the following main steps: (1) a Laplacian of Gaussian (LoG) based feature enhancement module to accentuate the appearance of boutons; (2) a Speeded Up Robust Features (SURF) interest point detector to find candidate locations for feature extraction; (3) non-maximum suppression to eliminate candidates detected more than once in the same local region; (4) generation of feature descriptors based on Gabor filters; (5) a Support Vector Machine (SVM) classifier, trained on features from labelled data, to distinguish between bouton and non-bouton candidates. Our method achieved a Recall of 95%, Precision of 76%, and F1 score of 84% on a new dataset that we make available for assessing bouton detection. On average, Recall and F1 score were significantly better than those of the current state-of-the-art method, while Precision was not significantly different. In conclusion, we demonstrate that our approach, which is independent of axon tracing, can detect boutons to a high level of accuracy and improves on the detection performance of existing approaches. The data and code (with an easy-to-use GUI) used in this article are available from open source repositories.
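The five-step pipeline described in this abstract can be sketched as follows. This is a minimal, illustrative reconstruction and not the authors' released code: it treats a single 2D slice (or maximum-intensity projection) rather than the full 3D stack, all parameter values and helper names (enhance_log, detect_candidates, non_max_suppression, gabor_descriptor) are assumptions, and the SURF detector requires the opencv-contrib-python build.

```python
# Illustrative sketch of the five-step bouton-detection pipeline described above.
# Not the authors' code: parameters, helper names, and the 2D (per-slice) handling
# of the 3D two-photon stack are assumptions made for brevity.
import numpy as np
import cv2                                   # SURF needs the opencv-contrib-python build
from scipy.ndimage import gaussian_laplace
from skimage.filters import gabor
from sklearn.svm import SVC


def enhance_log(img, sigma=2.0):
    """Step 1: Laplacian-of-Gaussian enhancement; bright blob-like boutons give a high response."""
    return -gaussian_laplace(img.astype(np.float32), sigma=sigma)


def detect_candidates(enhanced, hessian_threshold=400):
    """Step 2: SURF interest points as candidate bouton locations (xy coordinates and responses)."""
    img8 = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    keypoints = surf.detect(img8, None)
    pts = np.array([kp.pt for kp in keypoints])
    scores = np.array([kp.response for kp in keypoints])
    return pts, scores


def non_max_suppression(pts, scores, radius=5.0):
    """Step 3: keep only the strongest candidate within each local neighbourhood."""
    kept = []
    for i in np.argsort(-scores):
        if all(np.linalg.norm(pts[i] - pts[j]) > radius for j in kept):
            kept.append(i)
    return pts[kept]


def gabor_descriptor(img, pt, patch=15, freqs=(0.1, 0.2), thetas=(0.0, np.pi / 4, np.pi / 2)):
    """Step 4: concatenated mean Gabor-filter responses of a patch centred on the candidate."""
    x, y = int(round(pt[0])), int(round(pt[1]))
    h = patch // 2
    win = img[max(y - h, 0):y + h + 1, max(x - h, 0):x + h + 1]
    feats = []
    for f in freqs:
        for t in thetas:
            real, imag = gabor(win, frequency=f, theta=t)
            feats += [real.mean(), imag.mean()]
    return np.array(feats)


# Step 5: an SVM trained on descriptors extracted at manually labelled
# bouton / non-bouton locations classifies each surviving candidate.
# X_train, y_train = ...  # descriptors and labels from annotated training images
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# is_bouton = clf.predict([gabor_descriptor(img, pt) for pt in candidates])
```

In a sketch like this, candidate detection (steps 1 to 3) would be run slice-wise or on a projection of the 3D stack, and the SVM decision threshold could be tuned to trade Recall against Precision in line with the figures reported above.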
An important goal of medical imaging is to precisely detect patterns of disease specific to individual scans; however, this is challenged in brain imaging by the high degree of heterogeneity in shape and appearance. Traditional methods based on image registration have historically failed to detect variable features of disease, as they utilise population-based analyses suited
We assessed the pan-cancer predictability of multi-omic biomarkers from haematoxylin and eosin (H&E)-stained whole slide images (WSIs) in a systematic study using deep learning and standard evaluation measures. A total of 13,443 deep learning (DL) models predicting 4,481 multi-omic biomarkers across 32 cancer types were trained and validated. The investigated biomarkers included genetic mutations, under- and over-expression status at the transcriptomic (mRNA) and proteomic level, metabolomic pathways, established prognostic markers (including gene expression signatures and molecular subtypes), clinical outcomes and response to treatment. Overall, we established the general feasibility of predicting multi-omic markers across solid cancer types: 50% of the models could predict biomarkers with an area under the curve (AUC) above 0.633, and 25% of the models achieved an AUC above 0.711. Aggregating across the omic types, our deep learning models achieved the following performance: mean AUC of 0.634 ± 0.117 in predicting driver SNV mutations; 0.637 ± 0.108 for over-/under-expression of transcriptomic genes; 0.666 ± 0.108 for over-/under-expression of proteomic markers; 0.564 ± 0.081 for metabolomic pathways; 0.653 ± 0.097 for gene signatures and molecular subtypes; 0.742 ± 0.120 for standard of care biomarkers; and 0.671 ± 0.120 for clinical outcomes and treatment responses. The biomarkers were shown to be detectable from routine histology images across all investigated cancer types, with aggregate mean AUC exceeding 0.62 in almost all cancers. In addition, we observed that predictability is reproducible within a given marker and is less dependent on sample size and positivity ratio, indicating a degree of true predictability inherent to the biomarker itself.
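As a minimal sketch of how the summary statistics quoted above relate to per-model results, the snippet below aggregates one validation ROC-AUC per trained model into the overall median and upper-quartile figures and the per-omic-type mean ± SD. The dataframe layout and column names ("omic_type", "auc") are assumptions made for illustration, not part of the study's released code.

```python
# Illustrative aggregation of per-model validation AUCs into the summary
# statistics quoted above; the "omic_type"/"auc" column layout is assumed.
import pandas as pd
from sklearn.metrics import roc_auc_score


def model_auc(y_true, y_score):
    """ROC-AUC of one binary biomarker-prediction model on its validation set."""
    return roc_auc_score(y_true, y_score)


def summarise(results: pd.DataFrame) -> pd.DataFrame:
    """results: one row per trained model, with columns 'omic_type' and 'auc'."""
    overall = {
        "median_auc": results["auc"].median(),                # cf. 0.633 (50% of models above)
        "upper_quartile_auc": results["auc"].quantile(0.75),   # cf. 0.711 (25% of models above)
    }
    print(overall)
    # mean ± SD per omic type, matching the per-omic figures quoted in the text
    return results.groupby("omic_type")["auc"].agg(["mean", "std", "count"])
```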