The anorexigenic peptide glucagon-like peptide-1 (GLP-1) is secreted from gut enteroendocrine cells and brain preproglucagon (PPG) neurons, which respectively define the peripheral and central GLP-1 systems. PPG neurons in the nucleus tractus solitarii (NTS) are widely assumed to link the peripheral and central GLP-1 systems in a unified gut-brain satiation circuit. However, direct evidence for this hypothesis is lacking, and the necessary circuitry remains to be demonstrated. Here we show that PPG^NTS neurons encode satiation in mice, consistent with vagal signalling of gastrointestinal distension. However, PPG^NTS neurons predominantly receive vagal input from oxytocin receptor-expressing vagal neurons, rather than those expressing GLP-1 receptors. PPG^NTS neurons are not necessary for eating suppression by GLP-1 receptor agonists, and concurrent PPG^NTS neuron activation suppresses eating more potently than semaglutide alone. We conclude that central and peripheral GLP-1 systems suppress eating via independent gut-brain circuits, providing a rationale for pharmacological activation of PPG^NTS neurons in combination with GLP-1 receptor agonists as an obesity treatment strategy.
Objectives:
To compare detection patterns of 80 cephalometric landmarks identified by an automated identification system (AI) based on a recently proposed deep-learning method, You-Only-Look-Once version 3 (YOLOv3), with those identified by human examiners.
Materials and Methods:
The YOLOv3 algorithm was implemented with custom modifications and trained on 1028 cephalograms. A total of 80 landmarks were identified, comprising two vertical reference points, 46 hard tissue landmarks, and 32 soft tissue landmarks. On the 283 test images, the same 80 landmarks were identified twice by AI and by human examiners. Statistical analyses were conducted to detect whether any significant differences between AI and human examiners existed, and the influence of image factors on those differences was also investigated.
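As an illustration of how such detection errors can be quantified, the following is a minimal sketch (not taken from the paper) that computes per-landmark point-to-point errors in millimetres between AI-predicted and human-identified coordinates; the pixel spacing and the coordinate values are assumed purely for the example.

```python
# Minimal sketch (not from the paper): point-to-point detection error in mm
# between AI and human landmark sets. Assumes coordinates are in pixels and
# an isotropic pixel spacing (mm/px) is known; 0.1 mm/px is illustrative.
import numpy as np

def detection_errors_mm(pred_px: np.ndarray,
                        ref_px: np.ndarray,
                        pixel_spacing_mm: float = 0.1) -> np.ndarray:
    """Radial (Euclidean) error per landmark, in mm.

    pred_px, ref_px: arrays of shape (n_landmarks, 2) with (x, y) in pixels.
    """
    diff_mm = (pred_px - ref_px) * pixel_spacing_mm
    return np.linalg.norm(diff_mm, axis=1)

# Example with made-up coordinates for 3 of the 80 landmarks:
ai = np.array([[512.0, 640.0], [300.5, 720.2], [410.0, 555.5]])
human = np.array([[510.0, 642.0], [305.0, 718.0], [411.0, 556.0]])
err = detection_errors_mm(ai, human)
print(err.mean(), err.std())  # mean and SD of detection error in mm
```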
Results:
Upon repeated trials, AI always detected identical positions for each landmark, whereas human intraexaminer variability across repeated manual detections yielded a detection error of 0.97 ± 1.03 mm. The mean detection error between AI and human examiners was 1.46 ± 2.97 mm, and the mean difference between human examiners was 1.50 ± 1.48 mm. In general, the differences in detection error between AI and human examiners were less than 0.9 mm, which did not appear to be clinically significant.
Conclusions:
AI showed as accurate an identification of cephalometric landmarks as did human examiners. AI might be a viable option for repeatedly identifying multiple cephalometric landmarks.
Objective:
To compare the accuracy and computational efficiency of two of the latest deep-learning algorithms for automatic identification of cephalometric landmarks.
Materials and Methods:
A total of 1028 cephalometric radiographic images were selected as learning data to train the You-Only-Look-Once version 3 (YOLOv3) and Single Shot Multibox Detector (SSD) methods. Eighty landmarks were labeled as detection targets. After the deep-learning process, the algorithms were tested on a new test data set of 283 images. Accuracy was determined by measuring the point-to-point error and success detection rate and was visualized with scattergrams. The computational time of both algorithms was also recorded.
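For illustration, the sketch below (assumptions noted in the comments) shows how a success detection rate can be derived from per-landmark point-to-point errors; the 2/2.5/3/4 mm thresholds are those commonly reported for cephalometric benchmarks and are not necessarily the exact thresholds used in this study.

```python
# Minimal sketch (assumptions noted): success detection rate (SDR) computed
# from per-landmark point-to-point errors in mm. The thresholds below are the
# ones commonly used in the cephalometric literature, not restated from the
# paper.
import numpy as np

def success_detection_rate(errors_mm: np.ndarray,
                           thresholds=(2.0, 2.5, 3.0, 4.0)) -> dict:
    """Fraction of detections whose error falls within each threshold."""
    return {t: float(np.mean(errors_mm <= t)) for t in thresholds}

# Example with illustrative errors (mm) pooled over landmarks and images:
errors = np.array([0.8, 1.4, 2.1, 0.5, 3.6, 1.9])
print(success_detection_rate(errors))
# e.g. {2.0: 0.667, 2.5: 0.833, 3.0: 0.833, 4.0: 1.0}
```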
Results:
The YOLOv3 algorithm outperformed SSD in accuracy for 38 of 80 landmarks. The other 42 of 80 landmarks did not show a statistically significant difference between YOLOv3 and SSD. Error plots of YOLOv3 showed not only a smaller error range but also a more isotropic tendency. The mean computational time spent per image was 0.05 seconds and 2.89 seconds for YOLOv3 and SSD, respectively. YOLOv3 showed approximately 5% higher accuracy compared with the top benchmarks in the literature.
Conclusions:
Between the two latest deep-learning methods applied, YOLOv3 seemed to be more promising as a fully automated cephalometric landmark identification system for use in clinical practice.