Encoding sensor data into a map is a problem that must be addressed by any robotic agent operating in unknown or uncertain environments, and real-time map updates are crucial to safe planning and control. Most modern robotic sensors produce some form of depth or point cloud data that is only useful to the agent after being processed into an appropriate data structure, often an occupancy map. However, as the quality of sensor technology improves, so does the volume of the input data, which creates a problem when trying to construct occupancy maps in real time. Populating such an occupancy map from these dense point clouds can quickly become an expensive process, and many robotic agents have limited onboard computational bandwidth and memory. The result is delayed map updates and reduced operational performance in dynamic environments where real-time information is crucial for safe operation. While many modern robotic agents remain relatively limited by the power of their onboard central processing units (CPUs), a growing number of platforms have access to onboard graphics processing units (GPUs), and these resources remain underutilised with respect to the problem of occupancy mapping. We propose a novel probabilistic mapping solution that combines OpenVDB, NanoVDB, and Nvidia's Compute Unified Device Architecture (CUDA) to encode dense point clouds into OpenVDB data structures, exploiting the parallel compute strength of GPUs to provide significant speed advantages and to free up resources for tasks that cannot as easily be performed in parallel. We evaluate our solution with performance benchmarks on both a laptop and a low-power single-board computer with an onboard GPU; similar performance improvements should be attainable on any system with access to a CUDA-compatible GPU. Additionally, our library provides the means to simulate one or more sensors on an agent operating within a randomly generated 3D grid environment and to create a live map, both for evaluating planning and control techniques and for training agents via deep reinforcement learning. We also provide interface packages for the Robot Operating System (ROS1) and ROS2, and a ROS2 visualisation (RVIZ2) plugin for the underlying OpenVDB data structure.
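To make the encoding step concrete, the following is a minimal, illustrative C++ sketch of probabilistic occupancy updates into a sparse OpenVDB grid on the CPU. It is not the library's API: the grid name, voxel size, log-odds increment, and example points are assumptions chosen for illustration only.

```cpp
// Minimal illustrative sketch (not the library's API): accumulating per-voxel
// occupancy evidence in log-odds form inside a sparse OpenVDB FloatGrid.
#include <openvdb/openvdb.h>
#include <vector>

int main() {
    openvdb::initialize();

    // Sparse float grid storing per-voxel occupancy in log-odds form;
    // the background value 0.0f represents "unknown" (probability 0.5).
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(/*background=*/0.0f);
    grid->setTransform(openvdb::math::Transform::createLinearTransform(/*voxelSize=*/0.1));
    grid->setName("occupancy_log_odds");  // assumed grid name

    openvdb::FloatGrid::Accessor acc = grid->getAccessor();

    // Hypothetical world-space hit points from a depth sensor or point cloud.
    const std::vector<openvdb::Vec3d> hits = {{1.00, 2.00, 0.50}, {1.05, 2.00, 0.50}};
    const float logOddsHit = 0.85f;  // assumed sensor-model increment

    for (const auto& p : hits) {
        // Map the world coordinate into voxel index space and accumulate evidence.
        const openvdb::Coord ijk = grid->transform().worldToIndexCellCentered(p);
        acc.setValue(ijk, acc.getValue(ijk) + logOddsHit);
    }
    return 0;
}
```

In the GPU path described above, the same per-point work would instead be carried out by CUDA kernels operating on a NanoVDB representation of the grid, with the results merged back into the OpenVDB tree; free-space updates along each sensor ray (the corresponding "miss" decrements) are omitted from this sketch.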