This research proposes a parallelized approach to scaling up the calculation of inundation height, the minimum sea-level rise required to inundate a cell on a digital elevation model, which is based on Dijkstra's algorithm for shortest-path calculations on a graph. Our approach is built on the concepts of spatial decomposition, calculate-and-correct, and a master/worker parallelization paradigm. The approach was tested using the U.S. Coastal Relief Model (CRM) dataset from the National Geophysical Data Center on a multicore desktop computer and on various supercomputing resources available through the U.S. Extreme Science and Engineering Discovery Environment (XSEDE) program. Our parallel implementation not only enables computations larger than previously possible, but also significantly outperforms serial implementations in both running time and memory footprint as the number of processing cores increases. Scaling efficiency appeared to be tied to tile size and flattened out beyond a certain number of workers.

During the 20th century, world sea levels rose by 0.17 ± 0.05 m (IPCC, 2007). The Intergovernmental Panel on Climate Change (IPCC) estimates that the rate of sea-level rise will roughly double over the next century due to increasing global temperatures, with a conservative projection of global sea-level rise of 0.18-0.59 m by 2100 (IPCC, 2007). Coastal inundation could have significant impacts, as nearly a quarter of the world's population lives at elevations below 100 m and within 100 km of the coast (Nicholls et al., 2011). It is critical to know at what sea-level height, and in which coastal areas, inundation might occur in order to predict and mitigate economic and environmental impacts.

Using Dijkstra's algorithm, Li et al. (2014) calculated inundation height (the minimum sea-level rise required to inundate a cell) on a raster of approximately 46 million cells; the calculation took almost two hours. That raster was a single tile of the National Geophysical Data Center (NGDC) Coastal Relief Model (CRM) dataset, which comprises 537 one-degree by one-degree tiles (NOAA National Geophysical Data Center, 2014). Extrapolating to the entire dataset suggests that the time and memory needed by the existing approach would not be feasible without specialized, high-memory hardware. Even if a machine were able to handle the large data size, the running time required to perform the computation serially would be prohibitive.

Stampede2-TACC: Stampede2-TACC nodes use an Intel Xeon Phi 7250 (Knights Landing) processor with 68 cores running at 1.4 GHz in a single socket. Each node has 96 GB of DDR4 memory plus 16 GB of high-speed MCDRAM and includes approximately 100 GB of local SSD storage. Three shared Lustre file systems are available to each node, two with quotas of 10 GB and 1 TB per user and the third with approximately 30 PB of aggregate storage (TACC, 2018).

Wrangler-TACC: Wrangler-TACC nodes are Dell R730 servers with two Intel Haswell E5-2680-v3 CPUs with 12 cores each running at 2.5 GHz, 128 GB of DDR4 memory, and 146 GB of local storage for the operating system. Each node has access to a 10 PB Lustre file system and 0.5 PB of shared flash storage high-performance para...
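To make the serial baseline concrete, the following is a minimal sketch of a Dijkstra-style inundation-height sweep over a DEM raster, in the spirit of the calculation described above rather than a reproduction of Li et al.'s implementation. It assumes inundation height is the smallest sea-level rise at which a connected path of flooded cells (8-connected neighbors) reaches a cell from the open ocean; the function name `inundation_height`, the `ocean_mask` seed array, and the `no_data` sentinel are illustrative assumptions, not part of the original code.

```python
import heapq
import numpy as np

def inundation_height(dem, ocean_mask, no_data=-9999):
    """For each DEM cell, the minimum sea-level rise at which a connected
    path of flooded cells reaches it from the open ocean. This is a
    bottleneck (minimax) shortest-path problem solved with a
    Dijkstra-style priority-queue sweep."""
    rows, cols = dem.shape
    height = np.full(dem.shape, np.inf)
    pq = []
    # Seed the queue with open-ocean cells at zero inundation height.
    for r, c in zip(*np.nonzero(ocean_mask)):
        height[r, c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1),
                 (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while pq:
        h, r, c = heapq.heappop(pq)
        if h > height[r, c]:
            continue  # stale queue entry
        for dr, dc in neighbors:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dem[nr, nc] != no_data:
                # Water reaches the neighbor once sea level exceeds both the
                # level needed to reach (r, c) and the neighbor's elevation.
                nh = max(h, dem[nr, nc])
                if nh < height[nr, nc]:
                    height[nr, nc] = nh
                    heapq.heappush(pq, (nh, nr, nc))
    return height
```

The priority queue keeps the sweep near O(n log n) in the number of cells, but the whole raster must be resident in memory, which is the pressure reported for a single 46-million-cell tile. The spatial-decomposition, calculate-and-correct strategy summarized in the abstract addresses this by having a master distribute tiles to workers, each of which runs a sweep of this kind and then exchanges corrections along shared tile boundaries.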