Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this otherwise manual assessment. This paper presents a cloud-based evaluation framework, the VISCERAL Anatomy benchmarks, together with results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection. Participants implement their algorithms in virtual machines in the cloud, where they have access only to the training data; the benchmark administrators then run these virtual machines privately to objectively compare performance on a common, unseen test set. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores across the four available imaging modalities and for different subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results, and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the participants' algorithm outputs on a larger set of medical images without manual annotations, are available to the research community.
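To give a concrete sense of how such segmentation benchmarks typically score submissions against manual annotations, the sketch below computes the Dice overlap coefficient, a common evaluation measure for segmentation; it is an illustration only, not the benchmark's full metric suite.

```python
# Minimal sketch: Dice overlap between a predicted and a reference binary
# segmentation mask, a measure commonly used when scoring segmentation
# algorithms against manual annotations (illustrative, not the exact
# VISCERAL evaluation pipeline).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy example: two slightly shifted masks of the same structure.
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool);  pred[22:42, 20:40] = True
print(f"Dice = {dice_coefficient(pred, truth):.3f}")
```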
Brain-computer interface applications, developed for both healthy and clinical populations, critically depend on decoding brain activity in single trials. The goal of the present study was to detect distinctive spatiotemporal brain patterns within a set of event-related responses. We introduce a novel classification algorithm, the spatially weighted FLD-PCA (SWFP), which is based on a two-step linear classification of event-related responses, using a Fisher linear discriminant (FLD) classifier and principal component analysis (PCA) for dimensionality reduction. As a benchmark algorithm, we consider hierarchical discriminant component analysis (HDCA), introduced by Parra et al. (2007). We also consider a modified version of the HDCA, namely the hierarchical discriminant principal component analysis (HDPCA) algorithm. We compare the single-trial classification accuracies of all three algorithms, each applied to detect target images within a rapid serial visual presentation (RSVP, 10 Hz) of images from five different object categories, based on single-trial brain responses. We find that our classification algorithm is systematically superior in the tested paradigm. Additionally, HDPCA significantly increases classification accuracy compared to the HDCA. Finally, we show that presenting several repetitions of the same image exemplars improves accuracy, and thus may be important in cases where high accuracy is crucial.
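As a rough illustration of the kind of two-step linear pipeline described above (dimensionality reduction followed by a Fisher linear discriminant), the sketch below combines PCA and LDA from scikit-learn on simulated single-trial data; the array shapes, labels, and parameter values are assumptions, and this is not the authors' exact SWFP spatial-weighting scheme.

```python
# Minimal sketch of a two-step linear pipeline: PCA for dimensionality
# reduction followed by a Fisher/linear discriminant classifier.
# Illustrative only; data and parameters are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical single-trial EEG data: trials x (channels * time points),
# with binary labels (target vs. non-target image).
rng = np.random.default_rng(0)
X = rng.standard_normal((400, 64 * 100))   # 400 trials, 64 channels, 100 samples
y = rng.integers(0, 2, size=400)           # placeholder labels

clf = make_pipeline(
    PCA(n_components=50),                  # reduce dimensionality first
    LinearDiscriminantAnalysis(),          # Fisher linear discriminant step
)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.2f}")
```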
Background: Dental visits are unpleasant; sometimes, patients only seek treatment when they are in intolerable pain. Recently, the novel coronavirus (COVID-19) pandemic has highlighted the need for remote communication when patients and dentists cannot meet in person. Gingivitis is very common and characterized by red, swollen, bleeding gums. Gingivitis heals within 10 days of professional care and daily, thorough oral hygiene practices. If left untreated, however, it may progress and lead to tooth mobility or tooth loss. Of the many medical apps currently available, none monitor gingivitis.
Objective: This study aimed to present a characterization and development model of a mobile health (mHealth) app called iGAM, which focuses on periodontal health and improves the information flow between dentists and patients.
Methods: A focus group discussed the potential of an app to monitor gingivitis, and 3 semistructured in-depth interviews were conducted on the use of apps for monitoring gum infections. We used a qualitative design process based on the Agile approach, which incorporated the following 5 steps: (1) user story, (2) use cases, (3) functional requirements, (4) nonfunctional requirements, and (5) Agile software development cycles. In a pilot study, 18 participants aged 18-45 years with different levels of health literacy were given a toothbrush, toothpaste, mouthwash, toothpicks, and dental floss. After installing iGAM, they were asked to photograph their gums weekly for 4 weeks.
Results: All participants in the focus group believed in the potential of a mobile app to monitor gingivitis and reduce its severity. Concerns about security and privacy issues were discussed. From the interviews, 2 themes were derived: (1) “what's in it for me?” and (2) the need for a take-home message. The 5 cycles of development highlighted the importance of communication between dentists, app developers, and the pilot group. Qualitative analysis of the data from the pilot study showed difficulties with: (1) the camera, which were alleviated by providing mouth openers, and (2) the operation of the phone, which were alleviated by making the app fully automated, with a weekly reminder and an instructions document. Final interviews showed satisfaction with the app.
Conclusions: iGAM is the first mHealth app for monitoring gingivitis using self-photography. iGAM facilitates the information flow between dentists and patients between checkups and may be useful when face-to-face consultations are not possible (such as during the COVID-19 pandemic).
Background: Gum diseases affect a large proportion of the population worldwide. Unfortunately, most people do not follow a regular dental checkup schedule and only seek treatment when experiencing acute pain. We aim to provide a system for classifying gum health status based on the Modified Gingival Index (MGI) score using dental selfies alone.
Method: The input to our method is a manually cropped single-tooth image, and the output is the MGI classification of gum health status. The method consists of a two-stage cascade of robust, accurate, and highly optimized binary classifiers, optimized per tooth position.
Results: The dataset was constructed from a pilot study in which 44 participants took dental selfies using our iGAM app. From each dental selfie, eight single-tooth images were manually cropped, producing a total of 1520 images. The MGI score for each image was determined by a single examining dentist. On a held-out test set, our method achieved an average area under the curve (AUC) of 95%.
Conclusion: This paper presents a new method capable of accurately classifying gum health status based on the MGI score from a single dental selfie, enabling personal monitoring of gum health, which is particularly useful when face-to-face consultations are not possible.
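The sketch below illustrates the general idea of a two-stage cascade of binary classifiers applied to a three-level severity label; the features, label definitions, and model choices are placeholders for illustration, not the paper's actual implementation.

```python
# Minimal sketch of a two-stage cascade of binary classifiers for a
# three-level gum-health label (illustrative assumptions throughout).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 128))        # hypothetical per-tooth image features
mgi = rng.integers(0, 3, size=300)         # placeholder MGI-like labels: 0, 1, 2

# Stage 1: healthy (label 0) vs. any inflammation (label > 0).
stage1 = LogisticRegression(max_iter=1000).fit(X, (mgi > 0).astype(int))

# Stage 2: trained only on inflamed samples, mild (1) vs. more severe (2).
inflamed = mgi > 0
stage2 = LogisticRegression(max_iter=1000).fit(
    X[inflamed], (mgi[inflamed] == 2).astype(int)
)

def predict_label(x):
    """Cascade prediction for a single feature vector x."""
    if stage1.predict(x.reshape(1, -1))[0] == 0:
        return 0                            # classified as healthy at stage 1
    return 1 + stage2.predict(x.reshape(1, -1))[0]

print(predict_label(X[0]))
```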
The increasing amount of medical imaging data acquired in clinical practice constitutes a vast database of untapped diagnostically relevant information. This paper presents a new hybrid approach to retrieving the most relevant medical cases based on textual and image information.
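One common way to combine textual and image information for case retrieval is late fusion of per-modality similarity scores; the sketch below illustrates that generic idea with placeholder features and an assumed fusion weight, and is not the specific hybrid approach proposed in the paper.

```python
# Minimal sketch of late fusion for hybrid text/image retrieval: each case
# gets a text similarity and an image similarity to the query, combined by
# a weighted sum (weights and features are assumptions).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = ["nodule in right upper lobe", "normal chest radiograph", "left pleural effusion"]
query_text = "pulmonary nodule upper lobe"

vec = TfidfVectorizer().fit(reports + [query_text])
text_sim = cosine_similarity(vec.transform([query_text]), vec.transform(reports))[0]

# Placeholder image descriptors (in practice, e.g., CNN or texture features).
rng = np.random.default_rng(0)
case_feats = rng.standard_normal((3, 256))
query_feat = rng.standard_normal((1, 256))
image_sim = cosine_similarity(query_feat, case_feats)[0]

alpha = 0.5                                   # assumed fusion weight
combined = alpha * text_sim + (1 - alpha) * image_sim
ranking = np.argsort(combined)[::-1]          # most relevant cases first
print(ranking)
```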