Cochlear implant (CI) planning is usually based on preoperatively acquired CT or MRI data visualising risk structures in the petrous bone. In recent years, Digital Volume Tomography (DVT), which offers higher spatial resolution at a reduced radiation dose, has become increasingly important in the clinical routine of otology. In this work, we propose an extension of our interactive, “wizard”-guided approach for the segmentation of middle and inner ear structures to DVT data. Different filter pipelines enable the user to interactively segment the acoustic canal, ossicles, tympanic cavity, facial nerve, chorda tympani, round window, cochlea, and semicircular canals. The approach has been evaluated on six preoperatively acquired DVT datasets by an ENT expert. The results suggest that the proposed method handles DVT data well and can potentially be used for interactive OR planning.
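To illustrate what a single stage of such an interactive filter pipeline could look like, the following is a minimal sketch in Python using SimpleITK, assuming one seed-based region-growing step with user-adjustable parameters; the file name, seed position, and intensity window are placeholders and are not taken from the original work.

```python
import SimpleITK as sitk

# Hypothetical DVT volume; file name is a placeholder.
img = sitk.Cast(sitk.ReadImage("dvt_volume.nrrd"), sitk.sitkFloat32)

# Edge-preserving smoothing to suppress DVT noise before region growing.
smoothed = sitk.CurvatureFlow(image1=img, timeStep=0.125, numberOfIterations=5)

# User-provided seed voxel inside the target structure (placeholder indices),
# as it would be picked interactively in the wizard.
seed = (132, 142, 96)

# Region growing bounded by an intensity window; in an interactive setting the
# window would be proposed by the wizard and refined by the user.
segmentation = sitk.ConnectedThreshold(smoothed, seedList=[seed],
                                       lower=-800, upper=200)

# Light morphological closing to bridge small gaps in thin structures.
segmentation = sitk.Cast(segmentation, sitk.sitkUInt8)
segmentation = sitk.BinaryMorphologicalClosing(segmentation, (1, 1, 1))

sitk.WriteImage(segmentation, "structure_mask.nrrd")
```

In practice, each anatomical structure would get its own pipeline of this kind, with the filter parameters exposed as the interactive controls of the wizard.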
For the development, training, and validation of AI-based procedures, such as the analysis of clinical data, the prediction of critical events, or the planning of healthcare procedures, large amounts of data are required. In addition to the raw data of any origin (image data, bio-signals, health records, machine states, …), adequate supplementary information about the meaning encoded in the data is required. With this additional information - the semantics or knowledge - a tight relation between the raw data and human-understandable concepts from the real world can be established. Nevertheless, as the amount of data needed to develop robust AI-based methods keeps growing, the assessment and acquisition of the related knowledge become more and more challenging. In this work, an overview of currently available concepts of knowledge acquisition is given and evaluated. Four main groups of knowledge acquisition related to AI-based technologies have been identified. For image data, mainly iconic annotation methods are used, in which experienced users mark or draw depicted entities in the images and label them using predefined sets of classes. Similarly, bio-signals are manually labelled by marking important events along the timeline. If sufficient data is not available, augmentation and simulation techniques are applied, yielding data and semantics at the same time. In applications where expensive sensors are replaced by low-cost devices, the high-grade sensor data can be used as semantics. Finally, classic rule-based approaches are used, in which human factual and procedural knowledge about the data and its context is translated into machine-understandable procedures. All of these methods depend on the involvement of human experts. To reduce this dependence, more intelligent and hybrid approaches are needed, shifting the focus from the human-in-the-loop to the machine-in-the-loop.
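As a purely illustrative sketch of the first two acquisition groups (iconic image annotation and timeline-based bio-signal labelling), the following Python snippet shows what such annotation records could look like; all field names, identifiers, and values are assumptions and not part of the described work.

```python
# Hypothetical iconic-annotation record: an expert outlines an entity in an
# image and labels it from a predefined class set.
image_annotation = {
    "image_id": "ct_0001.dcm",            # raw data the annotation refers to
    "annotator": "expert_03",             # human-in-the-loop provenance
    "class_label": "facial_nerve",        # from a predefined label set
    "polygon": [(120, 84), (131, 90), (128, 101), (117, 97)],  # drawn contour
}

# Bio-signal labelling works analogously: important events are marked along
# the timeline of the recording.
signal_annotation = {
    "signal_id": "ecg_0007",
    "t_start_s": 12.4,
    "t_end_s": 13.1,
    "event_label": "arrhythmia",
}
```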
For the image-based documentation of a colonoscopy procedure, a 3D reconstruction of the hollow colon structure from endoscopic video streams is desirable. To obtain this reconstruction, 3D information about the colon has to be extracted from monocular colonoscopy image sequences. This information can be provided by estimating depth through shape-from-motion approaches, which use the image information from two successive frames together with exact knowledge of their disparity. During a standard colonoscopy, however, the spatial offset between successive frames changes continuously. Therefore, in this work deep convolutional neural networks (DCNNs) are applied to obtain piecewise depth maps and point clouds of the colon. These pieces can then be fused into a partial 3D reconstruction.
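As a sketch of how a predicted depth map can be turned into a point cloud, the following Python/NumPy snippet back-projects pixels through a standard pinhole camera model; the function name and the intrinsic parameters are assumptions for illustration and are not taken from the described DCNN pipeline.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a dense depth map (H x W, in metres) to camera-space 3D
    points using the pinhole model; fx, fy, cx, cy are assumed to come from
    the endoscope calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop invalid (zero-depth) pixels.
    return pts[pts[:, 2] > 0]

# Example usage with a synthetic depth map and placeholder intrinsics.
depth = np.full((480, 640), 0.05)                 # 5 cm everywhere
cloud = depth_to_point_cloud(depth, fx=520.0, fy=520.0, cx=320.0, cy=240.0)
print(cloud.shape)                                # (307200, 3)
```

The piecewise clouds produced this way would then be registered and fused to form the partial reconstruction described above.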
Abstract Background Surgery on the petrous bone poses a particular challenge for ENT surgeons. The aim of the BMBF-funded project was to develop a realistic training system for ear surgery in the form of a serious game. Methods The presented prototype of the HaptiVisT system serves as an ear surgery training system with visual feedback via a glasses-free 3D monitor and haptic feedback via a haptic arm simulating the drill. A variety of training options is provided by three available surgical procedures (antrotomy, mastoidectomy, posterior tympanotomy). A weighted point system makes training success measurable. In the course of the technical development of the prototype, a prospective evaluation was carried out by 8 ENT physicians and 4 students, addressing, among other aspects, the learning content and usability. A standardised questionnaire was used (ordinal scale: 1 = very good to 5 = very poor). Results Regarding the learning content, the aspects "consolidation of anatomy (mean = 1.58)", "training of hand-eye coordination (1.67)", "transferability to practice (1.83)" and "usefulness for practice (1.33)" received good to very good ratings. Usability also showed good ratings for the aspects "realism (2.29)", "interplay of haptics and visuals (2.33)" and "immersion in the training system (1.89)". The "motivation factor" was very high for all test persons (1.2). Conclusion The ear surgery training system HaptiVisT offers the possibility of immersive training of ear operations. Integration into everyday clinical practice and, in particular, into residency training for ENT specialists therefore appears worthwhile.
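As an illustration of how such a weighted point system could be computed, the following is a minimal Python sketch; the criteria, weights, and scores are purely illustrative assumptions and not those of the HaptiVisT system.

```python
# Hypothetical per-exercise scoring: each criterion gets a weight and a
# normalised score in [0, 1]; the weighted sum quantifies training success.
criteria = {
    "drilled_target_volume":  (0.4, 0.92),  # (weight, score)
    "risk_structure_contact": (0.4, 1.00),  # e.g. facial nerve untouched
    "procedure_time":         (0.2, 0.75),
}
total_score = sum(w * s for w, s in criteria.values())
print(f"Weighted training score: {total_score:.2f}")  # 0.92
```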