We report on a controlled user study comparing three visualization environments for common 3D exploration tasks. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in situ at the spatial position of the 3D hologram. The tablet enables interaction with 3D content through touch, spatial positioning, and tangible markers, although the 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve the understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study occupies a different position along these three dimensions. We asked 15 participants to perform four tasks, each with a different level of difficulty for spatial perception and a different number of degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that the desktop environment is generally still the fastest and most precise in almost all cases.
Fig. 1. Our system is the first to enable neuroscientists to interactively explore petascale volume data resulting from high-throughput electron microscopy data streams. The volume can be visualized while the acquisition is still in progress, and without pre-processing all data into a 3D multi-resolution hierarchy as required by all previous systems. Shown here: a 21,494 × 25,790 × 1,850 voxel mouse cortex volume.

Abstract: This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
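To make the cache-miss-driven design concrete, the following is a minimal Python sketch of on-demand block construction, under simplifying assumptions: a single resolution level, a flat dictionary standing in for the paper's GPU page tables, and a hypothetical `tile_source` callback standing in for the microscope image stream.

```python
# A minimal sketch of visualization-driven, on-demand block construction.
# Names (VirtualVolume, tile_source, TILE) are hypothetical; the actual system
# uses multi-level page tables and detects cache misses inside a GPU
# ray-casting kernel.
import numpy as np

TILE = 64  # edge length of a cached 3D block, in voxels (assumed)

class VirtualVolume:
    """Serves voxel samples on demand; missing blocks are built from 2D tiles."""

    def __init__(self, tile_source):
        # tile_source(z) returns the 2D image slice at depth z, or a zero
        # slice if the microscope has not acquired it yet (missing data).
        self.tile_source = tile_source
        self.cache = {}  # (bx, by, bz) -> 3D block (np.ndarray)

    def sample(self, x, y, z):
        key = (x // TILE, y // TILE, z // TILE)
        block = self.cache.get(key)
        if block is None:                      # "cache miss" during ray-casting
            block = self._build_block(key)     # on-demand, out-of-core work
            self.cache[key] = block
        return block[y % TILE, x % TILE, z % TILE]

    def _build_block(self, key):
        bx, by, bz = key
        # Assemble a 3D block only from the 2D slices it actually covers.
        slices = [self.tile_source(bz * TILE + dz)[by * TILE:(by + 1) * TILE,
                                                   bx * TILE:(bx + 1) * TILE]
                  for dz in range(TILE)]
        return np.stack(slices, axis=-1)       # indexed as [y, x, z]
```

The key property mirrored here is that a 3D block is assembled from 2D slices only when a sample inside it is first requested, so the amount of construction work is bounded by what ray-casting actually touches.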
Surgical approaches tailored to an individual patient's anatomy and pathology have become standard in neurosurgery. Precise preoperative planning of these procedures, however, is necessary to achieve an optimal therapeutic effect. Therefore, multiple radiological imaging modalities are used prior to surgery to delineate the patient's anatomy, neurological function, and metabolic processes. Developing a three-dimensional perception of the surgical approach, however, is traditionally still done by mentally fusing multiple modalities. Concurrent 3D visualization of these datasets can therefore improve the planning process significantly. In this paper we introduce an application for planning individual neurosurgical approaches with high-quality interactive multimodal volume rendering. The application consists of three main modules, which allow the user to (1) plan the optimal skin incision and opening of the skull tailored to the underlying pathology; (2) visualize superficial brain anatomy, function, and metabolism; and (3) plan the patient-specific approach for surgery of deep-seated lesions. The visualization is based on direct multi-volume raycasting on graphics hardware, where multiple volumes from different modalities can be displayed concurrently at interactive frame rates. Graphics memory limitations are avoided by performing raycasting on bricked volumes. For preprocessing tasks such as registration or segmentation, the visualization modules are integrated into a larger framework, thus supporting the entire workflow of preoperative planning.
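As a rough illustration of multi-volume raycasting, here is a minimal CPU-side Python sketch that composites samples from several co-registered volumes front to back. All names and the per-step fusion strategy are assumptions for illustration; the paper's GPU implementation with bricked volumes and interactive frame rates is far more sophisticated.

```python
# A minimal CPU sketch of multi-volume raycasting, assuming all volumes are
# already registered to a common voxel grid. Summing each modality's
# contribution at every sample position is one simple fusion strategy.
import numpy as np

def raycast(volumes, transfer_funcs, origin, direction, step=1.0, n_steps=256):
    """Front-to-back compositing of samples from several co-registered volumes."""
    color, alpha = np.zeros(3), 0.0
    pos = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    for _ in range(n_steps):
        ix, iy, iz = pos.astype(int)
        for vol, tf in zip(volumes, transfer_funcs):
            if (0 <= ix < vol.shape[0] and 0 <= iy < vol.shape[1]
                    and 0 <= iz < vol.shape[2]):
                c, a = tf(vol[ix, iy, iz])       # per-modality transfer function
                color += (1.0 - alpha) * a * np.asarray(c)
                alpha += (1.0 - alpha) * a
        if alpha > 0.99:                         # early ray termination
            break
        pos += step * direction
    return color, alpha
```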
This paper presents ConnectomeExplorer, an application for the interactive exploration and query-guided visual analysis of large volumetric electron microscopy (EM) data sets in connectomics research. Our system incorporates a knowledge-based query algebra that supports the interactive specification of dynamically evaluated queries, which enable neuroscientists to pose and answer domain-specific questions in an intuitive manner. Queries are built step by step in a visual query builder, combining simpler queries into more complex ones. Our application is based on a scalable volume visualization framework that scales to multiple volumes of several teravoxels each, enabling the concurrent visualization and querying of the original EM volume, additional segmentation volumes, neuronal connectivity, and additional metadata comprising a variety of neuronal data attributes. We evaluate our application on a data set of roughly one terabyte of EM data and 750 GB of segmentation data, containing over 4,000 segmented structures and 1,000 synapses. We demonstrate typical use-case scenarios of our collaborators in neuroscience, where our system has enabled them to answer specific scientific questions via interactive querying and analysis on the full-size data for the first time.
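The following is a minimal sketch of the kind of composable query algebra described above: simple predicates over segmented structures combined with boolean operators. The attribute names and the `Query` class are illustrative assumptions only; the actual system evaluates its queries dynamically against teravoxel segmentation volumes and connectivity data.

```python
# A minimal sketch of a composable query algebra over segmented structures.
# Attribute names ('type', 'volume_um3', 'synapse_count') are hypothetical.
class Query:
    def __init__(self, predicate):
        self.predicate = predicate            # callable(structure) -> bool

    def __and__(self, other):
        return Query(lambda s: self.predicate(s) and other.predicate(s))

    def __or__(self, other):
        return Query(lambda s: self.predicate(s) or other.predicate(s))

    def evaluate(self, structures):
        return [s for s in structures if self.predicate(s)]

# Example: axons larger than 2 cubic microns that form at least 5 synapses,
# built step by step from simpler queries.
is_axon   = Query(lambda s: s["type"] == "axon")
is_large  = Query(lambda s: s["volume_um3"] > 2.0)
connected = Query(lambda s: s["synapse_count"] >= 5)

result = (is_axon & is_large & connected).evaluate([
    {"type": "axon",     "volume_um3": 3.1, "synapse_count": 7},
    {"type": "dendrite", "volume_um3": 5.0, "synapse_count": 2},
])  # -> keeps only the first structure
```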
Fig. 1: Proofreading with Dojo. We present a web-based application for interactive proofreading of automatic segmentations of connectome data acquired via electron microscopy. Split, merge, and adjust functionality enables multiple users to correct the labeling of neurons in a collaborative fashion. Color-coded structures can be explored in 2D and 3D.

Abstract: Proofreading refers to the manual correction of automatic segmentations of image data. In connectomics, electron microscopy data is acquired at nanometer-scale resolution and results in very large image volumes of brain tissue that require fully automatic segmentation algorithms to identify cell boundaries. However, these algorithms require hundreds of corrections per cubic micron of tissue. Even though this task is time-consuming, it is fairly easy for humans to perform these corrections by splitting, merging, and adjusting segments during proofreading. In this paper we present the design and implementation of Mojo, a fully featured single-user desktop application for proofreading, and Dojo, a multi-user web-based application for collaborative proofreading. We evaluate the accuracy and speed of Mojo, Dojo, and Raveler, a proofreading tool from Janelia Farm, through a quantitative user study. We designed a between-subjects experiment and asked non-experts to proofread neurons in a publicly available connectomics dataset. Our results show a significant improvement in corrections made with the web-based Dojo, given the same amount of time. In addition, all participants using Dojo reported better usability. We discuss our findings and provide an analysis of requirements for designing visual proofreading software.
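To illustrate the two core proofreading operations, here is a minimal Python sketch of merge and split on a label volume. It is a simplification under stated assumptions: Mojo and Dojo drive these operations interactively from user input, whereas this sketch substitutes a plain connected-components split.

```python
# A minimal sketch of merge and split proofreading operations on a NumPy
# label volume. The connected-components split stands in for the interactive,
# user-guided splitting in Mojo/Dojo.
import numpy as np
from scipy import ndimage

def merge(labels, id_a, id_b):
    """Merge segment id_b into id_a (fixes a false split)."""
    labels[labels == id_b] = id_a
    return labels

def split(labels, seg_id, next_id):
    """Split a segment into its connected components (fixes a false merge)."""
    mask = labels == seg_id
    components, n = ndimage.label(mask)
    for c in range(2, n + 1):            # component 1 keeps the original id
        labels[components == c] = next_id
        next_id += 1
    return labels, next_id
```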