Sparse matrix multiplication (SPGEMM) has grown in importance in recent years due to its applications in data science and machine learning. Consequently, considerable research has focused on accelerating this kernel on GPUs. Designing massively parallel algorithms for SPGEMM is a challenging task, since the computation pattern is highly irregular, and the required memory and operations depend on the interaction between the nonzero layouts of the inputs. One strategy to attack this kernel consists of proposing new sparse matrix storage formats that help mitigate this irregularity. In previous work, we began a study of the recently proposed bmSparse matrix format, suggesting several modifications to the SPGEMM algorithm. This work integrates those extensions and proposes new improvements to unleash bmSparse's full potential before comparing it with more consolidated options. In particular, we enhance one of the most computationally demanding stages with an adaptive technique, apply optimizations to achieve more efficient data accesses, and analyze the effect of using Tensor Cores to accelerate the multiplication stage of the algorithm. Experimental results on a set of real-world sparse matrices show that the optimized implementation largely outperforms vendor implementations such as NVIDIA cuSPARSE and Intel MKL's CSR variant, while remaining competitive with MKL's BSR variant.