We present an improved stress majorization method that incorporates various constraints, including directional constraints, without the need to solve a constrained optimization problem. This is achieved by reformulating the stress function to impose constraints on both the edge vectors and lengths instead of just on the edge lengths (node distances). The result is a unified framework for both constrained and unconstrained graph visualization, in which we can model most existing layout constraints and develop new ones, such as star-shape and cluster-separation constraints, within stress majorization. The reformulation also allows us to parallelize the computation with an efficient GPU conjugate gradient solver, which yields fast and stable solutions even for large graphs. As a result, we enable constraint-based exploration of large graphs with 10K nodes, which previous methods cannot support.
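To make the reformulation concrete, here is a schematic sketch in our own notation (not necessarily the authors' exact formulation). Classical stress majorization minimizes

\[ \sigma(X) = \sum_{i<j} w_{ij}\,\bigl(\lVert x_i - x_j \rVert - d_{ij}\bigr)^2, \]

which constrains only pairwise distances. Replacing the scalar residual with a vector residual against a target edge vector \( \delta_{ij} \), which encodes both a desired length and a desired direction,

\[ \sigma'(X) = \sum_{(i,j)} w_{ij}\,\bigl\lVert (x_i - x_j) - \delta_{ij} \bigr\rVert^2, \]

yields an objective that is quadratic in the node positions \( X \) and can therefore be minimized with a (GPU) conjugate gradient solver rather than a constrained optimizer.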
Traditional fisheye views for exploring large graphs introduce substantial distortions that often reduce the readability of paths and other structures of interest. To overcome these problems, we propose a framework for structure-aware fisheye views. Using edge orientations as constraints for graph layout optimization allows us not only to reduce spatial and temporal distortions during fisheye zooms, but also to improve the readability of the graph structure. Furthermore, the framework enables us to optimize fisheye lenses towards specific tasks and to design a family of new lenses: polyfocal, cluster, and path lenses. A GPU implementation lets us process large graphs with up to 15,000 nodes at interactive rates. A comprehensive evaluation, a user study, and two case studies demonstrate that our structure-aware fisheye views improve layout readability and user performance.
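One way to read "edge orientations as constraints" is as an extra penalty term in the layout objective; the following is only a sketch in our own notation (the weight \( \lambda \) and the unit directions \( o_{ij} \), taken from the pre-zoom layout, are our symbols, not the paper's):

\[ \min_X \;\sum_{i<j} w_{ij}\,\bigl(\lVert x_i - x_j \rVert - d_{ij}\bigr)^2 \;+\; \lambda \sum_{(i,j)\in E} \Bigl\lVert \tfrac{x_i - x_j}{\lVert x_i - x_j \rVert} - o_{ij} \Bigr\rVert^2. \]

Under this reading, the zoomed layout is pulled toward the magnified target distances \( d_{ij} \) while edges are discouraged from rotating away from their original orientations, which is what keeps the structure readable during the zoom.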
Database and data structure research can improve machine learning performance in many ways. One way is to design better algorithms over data structures. This paper combines incremental computation with sequential and probabilistic filtering to enable "forgetful" tree-based learning algorithms to cope with streaming data that suffers from concept drift. (Concept drift occurs when the functional mapping from input to classification changes over time.) The forgetful algorithms described in this paper achieve high performance while maintaining high-quality predictions on streaming data. Specifically, the algorithms are up to 24 times faster than state-of-the-art incremental algorithms with at most a 2% loss of accuracy, or at least twice as fast with no loss of accuracy. This makes such structures suitable for high-volume streaming applications.
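As a purely illustrative sketch (not the paper's algorithm), a minimal "forgetful" learner can be built by bounding the training window and retraining a batch tree whenever recent accuracy suggests drift; the class name, window size, and threshold below are hypothetical, and scikit-learn's DecisionTreeClassifier stands in for the tree learner:

    # Illustrative only: a windowed "forgetful" wrapper around a batch tree learner,
    # retrained when recent accuracy drops. This is a crude stand-in for the paper's
    # incremental, filtered forgetting strategy; all names and thresholds are hypothetical.
    from collections import deque
    from sklearn.tree import DecisionTreeClassifier

    class ForgetfulTree:
        def __init__(self, window=1000, drift_threshold=0.7, check_every=100):
            self.window = deque(maxlen=window)      # forget samples older than the window
            self.drift_threshold = drift_threshold  # retrain if recent accuracy falls below this
            self.recent_hits = deque(maxlen=check_every)
            self.model = None

        def predict(self, x):
            if self.model is None:
                return None
            return self.model.predict([x])[0]

        def update(self, x, y):
            # Track whether the current model still predicts the stream correctly.
            if self.model is not None:
                self.recent_hits.append(self.predict(x) == y)
            self.window.append((x, y))
            # Retrain from the window when the model is missing or appears stale.
            stale = (len(self.recent_hits) == self.recent_hits.maxlen
                     and sum(self.recent_hits) / len(self.recent_hits) < self.drift_threshold)
            if self.model is None or stale:
                X, Y = zip(*self.window)
                self.model = DecisionTreeClassifier().fit(list(X), list(Y))
                self.recent_hits.clear()

The point of the paper, as the abstract states, is to make this kind of forgetting cheap through incremental computation and sequential/probabilistic filtering, rather than through full retraining as in the sketch above.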
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.