The application of deep learning methods to raw electroencephalogram (EEG) data is growing increasingly common. While these methods offer the possibility of improved performance relative to approaches that rely on manually engineered features, they also present the problem of reduced explainability. As such, a number of studies have sought to provide explainability methods uniquely adapted to the domain of deep learning-based raw EEG classification. In this study, we present a taxonomy of those methods, identifying existing approaches that provide insight into spatial, spectral, and temporal features. We then present a novel framework consisting of a series of explainability approaches for gaining insight into classifiers trained on raw EEG data. Our framework provides spatial, spectral, and temporal explanations similar to existing approaches. However, it also, to the best of our knowledge, proposes the first explainability approaches for insight into spatial and spatio-spectral interactions in EEG. This is particularly important given the frequent use and well-characterized importance of EEG connectivity measures in the analysis of neurological and neuropsychiatric disorders. We demonstrate our proposed framework within the context of automated major depressive disorder (MDD) diagnosis, training a high-performing one-dimensional convolutional neural network with a robust cross-validation approach on a publicly available dataset. We identify interactions between central electrodes and other electrodes, as well as differences in frontal theta, beta, and low-gamma activity between healthy controls and individuals with MDD. Our study represents a significant step forward for the field of deep learning-based raw EEG classification, providing new capabilities in interaction explainability and offering direction for future innovations through our proposed taxonomy.