MPI neighborhood collectives were introduced in the MPI-3.0 standard to support the sparse communication patterns used by many applications. At the same time, GPU-aware MPI communication has become a prominent feature of modern heterogeneous systems. With the rise of AMD GPUs and their incorporation into upcoming exascale systems such as Frontier, it has become essential to optimize communication libraries for AMD platforms. In this paper, we take advantage of the hardware and networking features of AMD GPUs to design efficient and scalable neighborhood collective operations: allgather and allgatherv. We evaluate the proposed design on Random Sparse Graph and Moore neighborhood micro-benchmarks as well as an SpMM kernel. The results show speedups of up to 7.03x for the Random Sparse Graph micro-benchmark, up to 3.82x for the Moore neighborhood micro-benchmark, and up to 2.29x for the SpMM kernel.
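For readers unfamiliar with the interface, the following is a minimal sketch of the standard MPI-3.0 neighborhood allgather over a distributed-graph communicator; it illustrates the API this work targets, not the proposed design itself. The ring-shaped neighbor lists are placeholder assumptions standing in for an application's sparse communication graph, and with a GPU-aware MPI implementation the send/receive buffers could be GPU-resident.

```c
/* Illustrative sketch: MPI_Neighbor_allgather over a distributed-graph
 * communicator (the MPI-3.0 neighborhood collective interface).
 * The ring topology below is a placeholder for a real sparse graph. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank communicates with its left and right neighbors on a ring. */
    int sources[2]      = { (rank - 1 + size) % size, (rank + 1) % size };
    int destinations[2] = { (rank - 1 + size) % size, (rank + 1) % size };

    MPI_Comm graph_comm;
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                   2, sources, MPI_UNWEIGHTED,
                                   2, destinations, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0 /* no reorder */,
                                   &graph_comm);

    int sendbuf = rank;   /* one int sent to every out-neighbor      */
    int recvbuf[2];       /* one int received from each in-neighbor  */
    MPI_Neighbor_allgather(&sendbuf, 1, MPI_INT,
                           recvbuf, 1, MPI_INT, graph_comm);

    printf("rank %d received %d and %d\n", rank, recvbuf[0], recvbuf[1]);

    MPI_Comm_free(&graph_comm);
    MPI_Finalize();
    return 0;
}
```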