The luminosity determination for the ATLAS detector at the LHC during pp collisions at √s = 8 TeV in 2012 is presented. The evaluation of the luminosity scale is performed using several luminometers, and comparisons between these luminosity detectors are made to assess the accuracy, consistency and long-term stability of the results. A luminosity uncertainty of δL/L = ±1.9% is obtained for the 22.7 fb⁻¹ of pp collision data delivered to ATLAS at √s = 8 TeV in 2012.
Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient's three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods.
Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient’s 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope’s live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas–Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method’s potential for real-time continuous registration. A human phantom study confirms the method’s efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method’s efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients.
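The inverse-compositional framework mentioned above can be illustrated with a minimal sketch. The example below is not the paper's implementation: it assumes the simplest possible warp (pure 2-D translation) and a synthetic grayscale image, whereas the actual method registers bronchoscopic video frames against endoluminal renderings. The key inverse-compositional idea is preserved: the gradient, Jacobian, and Gauss–Newton Hessian are precomputed once on the template, so each iteration costs only a warp, a residual, and a small linear solve.

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Sample img at float coordinates with bilinear interpolation (clamped)."""
    h, w = img.shape
    xs = np.clip(xs, 0.0, w - 1.001)
    ys = np.clip(ys, 0.0, h - 1.001)
    x0 = xs.astype(int); y0 = ys.astype(int)
    dx = xs - x0; dy = ys - y0
    return (img[y0, x0] * (1 - dx) * (1 - dy)
            + img[y0, x0 + 1] * dx * (1 - dy)
            + img[y0 + 1, x0] * (1 - dx) * dy
            + img[y0 + 1, x0 + 1] * dx * dy)

def inverse_compositional_translation(template, image, iters=50, tol=1e-6):
    """Estimate translation p = (tx, ty) such that image(x + p) ~= template(x).

    Inverse-compositional Lucas-Kanade: steepest-descent images and the
    Hessian are built from the *template* gradient, once, outside the loop.
    """
    gy, gx = np.gradient(template)                  # template gradient (precomputed)
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)  # N x 2 steepest-descent images
    H_inv = np.linalg.inv(J.T @ J)                  # constant 2x2 Gauss-Newton Hessian
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    p = np.zeros(2)
    for _ in range(iters):
        warped = bilinear_sample(image, xs + p[0], ys + p[1])
        err = (warped - template).ravel()
        dp = H_inv @ (J.T @ err)
        p -= dp                                     # compose with the *inverted* increment
        if np.linalg.norm(dp) < tol:
            break
    return p

# Synthetic demo: recover a known sub-pixel shift of a smooth bump.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w].astype(float)
image = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 80.0)
p_true = np.array([1.5, -0.8])
template = bilinear_sample(image, xx + p_true[0], yy + p_true[1])
p_est = inverse_compositional_translation(template, image)
```

Precomputing the Hessian is what makes the per-frame cost low enough for the continuous, real-time operation the abstract reports; a forward-additive formulation would rebuild it at every iteration.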
The purpose of this model-based study was to determine the accuracy of placing dental implants using a new dynamic navigation system. This investigation focuses on measurements of overall accuracy for implant placement relative to the virtual plan in both dentate and edentulous models, and provides a comparison with a meta-analysis of values reported in the literature for comparable static guidance, dynamic guidance, and freehand placement studies. This study involved one surgeon experienced with dynamic navigation placing implants in models under clinical simulation using a dynamic navigation system (X-Guide, X-Nav Technologies, LLC, Lansdale, Pa) based on optical triangulation tracking. Virtual implants were placed into planned sites using the navigation system computer. Post-implant placement cone-beam scans were taken. These scans were mesh overlaid with the virtual plan and used to determine deviations from the virtual plan. The primary outcome variables were platform and angular deviations comparing the actual placement to the virtual plan. The angular accuracy of implants delivered using the tested device was 0.89° ± 0.35° for dentate case types and 1.26° ± 0.66° for edentulous case types, measured relative to the preoperative implant plan. Three-dimensional positional accuracy was 0.38 ± 0.21 mm for dentate and 0.56 ± 0.17 mm for edentulous, measured from the implant apex.
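The two outcome metrics above (3-D positional deviation at the apex, and angular deviation between implant long axes) are standard geometric quantities. The helper below is a hypothetical illustration of how they are computed once planned and placed implants are expressed in a common coordinate frame, e.g. after the mesh overlay step; it is not part of the X-Guide software, and the coordinate values in the demo are invented.

```python
import numpy as np

def implant_deviation(plan_apex, plan_axis, placed_apex, placed_axis):
    """Deviation of a placed implant from the virtual plan.

    Returns (apex_deviation_mm, angular_deviation_deg):
    - apex deviation: Euclidean distance between planned and actual apex points,
    - angular deviation: angle between the two implant long-axis directions.
    All inputs are assumed to be in the same (registered) coordinate frame.
    """
    apex_dev = float(np.linalg.norm(np.asarray(placed_apex, float)
                                    - np.asarray(plan_apex, float)))
    a = np.asarray(plan_axis, float);   a /= np.linalg.norm(a)
    b = np.asarray(placed_axis, float); b /= np.linalg.norm(b)
    # Clip the dot product to guard against round-off outside [-1, 1].
    ang = float(np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))))
    return apex_dev, ang

# Invented demo values: apex displaced by a 3-4-5 offset (0.5 mm),
# axis tilted 2 degrees in the x-z plane.
theta = np.radians(2.0)
apex_dev, ang_dev = implant_deviation(
    plan_apex=(0.0, 0.0, 0.0),
    plan_axis=(0.0, 0.0, 1.0),
    placed_apex=(0.3, 0.0, 0.4),
    placed_axis=(np.sin(theta), 0.0, np.cos(theta)),
)
```

Reporting both quantities matters because a small angular error can still produce a clinically significant apex offset over the length of the implant.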