Pipeline architectures provide a versatile and efficient mechanism for constructing visualizations, and they have been implemented in numerous libraries and applications over the past two decades. In addition to allowing developers and users to freely combine algorithms, visualization pipelines have proven to work well when streaming data and to scale well on parallel distributed-memory computers. However, current pipeline visualization frameworks have a critical flaw: they are unable to manage time-varying data. As data flows through the pipeline, each algorithm has access to only a single snapshot in time of the data. This prevents the implementation of algorithms that perform any temporal processing, such as particle tracing; plotting over time; or interpolation, fitting, or smoothing of time-series data. As data acquisition technology improves, as simulation time-integration techniques become more complex, and as simulations save results less frequently and less regularly, the ability to analyze the time behavior of data becomes ever more important. This paper describes a modification to the traditional pipeline architecture that allows it to accommodate temporal algorithms. Furthermore, the architecture allows temporal algorithms to be used in conjunction with algorithms expecting a single time snapshot, thus simplifying software design and allowing adoption into existing pipeline frameworks. Our architecture also continues to work well in parallel distributed-memory environments. We demonstrate our architecture by modifying the popular VTK framework and exposing the functionality to the ParaView application. We use this framework to apply time-dependent algorithms on large data with a parallel cluster computer, thereby exercising functionality that previously did not exist.
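To make the temporal extension concrete, the following minimal sketch (our own C++ illustration, not the paper's VTK implementation; all type and function names here are hypothetical) shows the key idea: during the pipeline's request pass, a temporal filter expands the single requested time into the window of timesteps it needs, so the upstream source delivers several snapshots in one update.

// Minimal sketch (hypothetical names, not the VTK API) of a temporal pipeline:
// the filter expands one requested time into a window of timesteps, so the
// upstream source delivers multiple snapshots in a single update.
#include <cstddef>
#include <cstdio>
#include <map>
#include <vector>

using Snapshot   = std::vector<double>;          // the data at one timestep
using TimeSeries = std::map<double, Snapshot>;   // timesteps a source can serve

// A source that can produce any requested subset of its stored timesteps.
struct Source {
    TimeSeries data;
    TimeSeries Update(const std::vector<double>& requestedTimes) const {
        TimeSeries out;
        for (double t : requestedTimes) out[t] = data.at(t);
        return out;
    }
};

// A temporal filter: to produce output at time t, it requests the two stored
// timesteps bracketing t and linearly interpolates between the snapshots.
struct TemporalInterpolator {
    const Source* input;
    Snapshot Update(double t, double t0, double t1) const {
        // Request pass: ask upstream for a window of timesteps, not one time.
        TimeSeries window = input->Update({t0, t1});
        const Snapshot& a = window.at(t0);
        const Snapshot& b = window.at(t1);
        // Execute pass: blend the two snapshots.
        double w = (t - t0) / (t1 - t0);
        Snapshot out(a.size());
        for (std::size_t i = 0; i < a.size(); ++i)
            out[i] = (1.0 - w) * a[i] + w * b[i];
        return out;
    }
};

int main() {
    Source src;
    src.data[0.0] = {0.0, 10.0};
    src.data[1.0] = {1.0, 20.0};
    TemporalInterpolator interp{&src};
    Snapshot s = interp.Update(0.25, 0.0, 1.0);  // interpolate at t = 0.25
    std::printf("%g %g\n", s[0], s[1]);          // prints: 0.25 12.5
}

Because this negotiation happens per filter, an algorithm that expects a single snapshot simply continues to request one timestep, which is how temporal and non-temporal algorithms can coexist in the same pipeline.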
As the number of cores in processors increases and accelerator architectures become more common, an ever greater number of threads is required to achieve full processor utilization. Our current parallel scientific visualization codes rely on partitioning data to achieve parallel processing, but this approach will not scale as we approach massive threading, in which work is distributed at so fine a level that each thread is responsible for only a minute portion of the data. In this paper we characterize the challenges of refactoring our current visualization algorithms by considering the finest portion of work each performs and examining the domain of input data, the overlaps of output domains, and the interdependencies among work instances. We divide our visualization algorithms into eight categories, each containing algorithms with the same interdependencies. By focusing our research efforts on solving these categorical challenges rather than on the legion of individual algorithms, we can make attainable advances toward extreme-scale computing.
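As a rough illustration of the finest-grained decomposition this argument rests on (a C++17 sketch of our own, not code or terminology from the paper), the simplest kind of category is a map-style kernel: each work instance reads one input element and writes one output element, with no output overlaps or interdependencies, so a scheduler is free to assign one thread per element rather than one process per large data partition.

// Rough sketch (C++17, illustrative only) of the finest-grained work unit in
// the simplest case: a map-style kernel whose instances each read one element
// and write one element, with no output overlaps or interdependencies.
#include <algorithm>
#include <cstdio>
#include <execution>
#include <vector>

// One work instance: touches exactly one input value and one output slot.
double workInstance(double fieldValue) {
    return 2.0 * fieldValue + 1.0;  // e.g., a per-point field transform
}

int main() {
    std::vector<double> in(1 << 20, 0.5), out(in.size());
    // With no interdependencies among instances, the scheduler may assign one
    // thread per element (massive threading) instead of one rank per partition.
    std::transform(std::execution::par_unseq,
                   in.begin(), in.end(), out.begin(), workInstance);
    std::printf("%g\n", out[0]);  // prints: 2
}

Categories whose instances overlap in their output domains or depend on one another's results do not decompose this cleanly; isolating those shared challenges is precisely the point of the classification.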