Grading hydronephrosis severity relies on subjective interpretation of renal ultrasound images. Deep learning offers a data-driven approach to classifying data, including images, and is therefore a promising option for grading hydronephrosis. The current study explored the potential of deep convolutional neural networks (CNNs), a type of deep learning algorithm, to grade hydronephrosis ultrasound images according to the 5-point Society for Fetal Urology (SFU) classification system, and discussed potential applications in developing decision and teaching aids for clinical practice. We developed a five-layer CNN to grade 2,420 sagittal hydronephrosis ultrasound images [191 SFU 0 (8%), 407 SFU I (17%), 666 SFU II (28%), 833 SFU III (34%), and 323 SFU IV (13%)] from 673 patients ranging from 0 to 116.29 months old (M age = 16.53, SD = 17.80). Five-way (all grades) and two-way classification problems [i.e., II vs. III, and low (0-II) vs. high (III-IV)] were explored. In the five-way classification problem, the CNN classified 94% (95% CI, 93-95%) of the images correctly or within one grade of the provided label; 51% of all images (95% CI, 49-53%) were predicted exactly correctly, with an average weighted F1 score of 0.49 (95% CI, 0.47-0.51). The CNN achieved an average accuracy of 78% (95% CI, 75-82%) with an average weighted F1 of 0.78 (95% CI, 0.74-0.82) when classifying low vs. high grades, and an average accuracy of 71% (95% CI, 68-74%) with an average weighted F1 score of 0.71 (95% CI, 0.68-0.75) when discriminating between grades II and III. Our model performs well above chance and classifies almost all images either correctly or within one grade of the provided label. We have demonstrated the applicability of a CNN approach to hydronephrosis ultrasound image classification. Further investigation into a deep learning-based clinical adjunct for hydronephrosis is warranted.
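The abstract does not specify the network's framework or layer configuration, so the following is a minimal sketch of what a five-layer CNN classifier for the five SFU grades could look like, written in PyTorch. The layer sizes, the 128x128 input resolution, and the HydroCNN name are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a five-layer CNN (four convolutional layers plus one
# fully connected classifier) for five-class SFU grading. All layer sizes,
# the input resolution, and the class name are assumptions for illustration;
# the abstract does not describe the authors' architecture.
import torch
import torch.nn as nn

class HydroCNN(nn.Module):  # hypothetical name
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        # x: (batch, 1, H, W) grayscale ultrasound images
        return self.classifier(self.features(x).flatten(1))

model = HydroCNN()
logits = model(torch.randn(8, 1, 128, 128))  # batch of 8 dummy images
print(logits.shape)  # torch.Size([8, 5]) -> one logit per SFU grade
```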
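Likewise, the reported metrics (exact accuracy, within-one-grade accuracy, and weighted F1) can be computed as sketched below; the label and prediction arrays here are hypothetical, and the study's actual evaluation code is not given in the abstract.

```python
# Sketch of the three reported metrics: exact accuracy, accuracy within one
# SFU grade, and weighted F1. The label/prediction arrays are hypothetical.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([0, 1, 2, 3, 4, 2, 3])  # hypothetical SFU grade labels
y_pred = np.array([0, 2, 2, 4, 3, 2, 1])  # hypothetical model predictions

exact = (y_pred == y_true).mean()
within_one = (np.abs(y_pred - y_true) <= 1).mean()  # off by at most 1 grade
weighted_f1 = f1_score(y_true, y_pred, average="weighted")

print(f"exact: {exact:.2f}, within one grade: {within_one:.2f}, "
      f"weighted F1: {weighted_f1:.2f}")
```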
In daily tasks, we are often confronted with competing potential targets and must select one to act on. It has been suggested that, prior to target selection, the human brain encodes the motor goals of multiple potential targets. However, this view remains controversial, and it has been argued that only a single motor goal is encoded, or that motor goals are specified only after target selection. To investigate this issue, we measured participants’ gaze behaviour while they viewed two potential reach targets, one of which was cued after a preview period. We applied visuomotor rotations to dissociate each visual target location from its corresponding motor goal location, i.e., the location toward which participants needed to aim their hand to bring the rotated cursor to the target. During the preview period, participants most often fixated both motor goals, but also frequently fixated one, or neither, motor goal location. Further gaze analysis revealed that on trials in which both motor goals were fixated, both locations were held in memory simultaneously. These findings show that, at the level of single trials, the brain most often encodes multiple motor goals prior to target selection, but may also encode one or no motor goals. This result may help reconcile a key debate concerning the specification of motor goals under target uncertainty.
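As an illustration of the visuomotor rotation manipulation described above: under a cursor rotation of θ about the movement start position, the motor goal is the visual target location rotated by -θ about that start position. The sketch below assumes a planar reach and an illustrative 45° rotation; the abstract does not report specific rotation angles or geometry.

```python
# Illustrative geometry of a visuomotor rotation: if the cursor is rotated
# by +rotation_deg about the start position relative to the hand, the hand
# must aim at the target rotated by -rotation_deg to land the cursor on the
# target. The 45-degree angle and coordinates are assumptions; the abstract
# reports no specific values.
import numpy as np

def motor_goal(target_xy, start_xy, rotation_deg):
    """Aim point that brings a rotated cursor onto the visual target."""
    theta = np.deg2rad(-rotation_deg)  # compensate by rotating the other way
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(start_xy) + rot @ (np.asarray(target_xy) - np.asarray(start_xy))

start = np.array([0.0, 0.0])
target = np.array([0.0, 10.0])          # visual target straight ahead
print(motor_goal(target, start, 45.0))  # aim point ~(7.07, 7.07)
```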