Collecting and analyzing hyperspectral imagery (HSI) of plant roots over time can enhance our understanding of their function, responses to environmental factors, turnover, and relationship with the rhizosphere. Current belowground red-green-blue (RGB) root imaging studies infer such functions from physical properties like root length, volume, and surface area. HSI provides a more complete spectral perspective of plants by capturing a high-resolution spectral signature of plant parts, which has extended studies beyond physical properties to include physiological properties, chemical composition, and phytopathology. Understanding crop plants’ physical, physiological, and chemical properties enables researchers to identify high-yielding, drought-resilient genotypes that can withstand climate change and sustain future population needs. However, most HSI plant studies use cameras positioned above ground, and similar belowground advances are urgently needed. One reason for the scarcity of belowground HSI studies is that root features often have limited distinguishing reflectance intensities compared to the surrounding soil, potentially rendering conventional image analysis methods ineffective. In the field of machine learning (ML), there are currently no publicly available datasets that contain the heavy correlation, highly textured background, and thin features characteristic of belowground root systems. Here we present HyperPRI, a novel dataset containing RGB and HSI data for in situ, non-destructive, underground plant root analysis using ML tools. HyperPRI contains images of plant roots grown in rhizoboxes for two annual crop species – peanut (Arachis hypogaea) and sweet corn (Zea mays). Drought conditions are simulated once during the growth period, and the boxes are imaged and weighed on select days across two months. Alongside the images, we provide hand-labeled semantic masks and imaging-environment metadata.
Additionally, we present baselines for root segmentation on this dataset and compare methods that focus on spatial, spectral, and spatial-spectral features to predict pixel-wise labels. Results demonstrate that combining HyperPRI’s hyperspectral and spatial information improves semantic segmentation of the target objects.