Hydrocarbon prospect risk assessment is a key process in oil and gas exploration in which multiple geophysical data modalities, including seismic data, well logs, and geological information, are analyzed jointly to estimate the likelihood of drilling success at a given location. Over the years, geophysicists have sought to understand the factors that influence the probability of success for hydrocarbon prospects. To this end, a large database of prospect drill outcomes and associated attributes has been collected and analyzed with correlation-based techniques to identify the features that contribute most to the final outcome. Machine learning can model complex feature interactions and learn input-output mappings for complicated, high-dimensional datasets. In many instances, however, machine learning models are not interpretable to end users, limiting their utility both for understanding the underlying scientific principles of the problem domain and for deployment in the risk assessment process. In this context, we leverage explainable machine learning to interpret various black-box machine learning models trained on the aforementioned prospect database for risk assessment. Through case studies on real data, we demonstrate that this model-agnostic explainability analysis for prospect risking can (1) reveal novel scientific insights into how features interact to determine prospect outcomes, (2) assist with feature engineering for machine learning models, (3) detect bias in datasets arising from spurious correlations, and (4) build a global picture of a model's understanding of the data by aggregating local explanations computed on individual data points.
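To make the idea of model-agnostic explainability concrete, the sketch below applies permutation feature importance to a black-box classifier. This is only an illustration of the general technique, not the authors' method: the feature names and the synthetic dataset are hypothetical stand-ins for prospect attributes and drill outcomes, and permutation importance is one of several model-agnostic explanation tools (alongside local methods such as LIME or SHAP).

```python
# A minimal sketch of model-agnostic explainability, assuming a generic
# tabular setup: the feature names and synthetic data below are
# illustrative placeholders, NOT the actual prospect database.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for prospect attributes vs. drill outcome (0/1)
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["trap", "seal", "reservoir", "source", "timing", "noise"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the model as a black box: the explainer only queries predictions
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# resulting drop in held-out accuracy -- a model-agnostic explanation
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Because the explainer interacts with the model only through its predictions, the same analysis applies unchanged to any classifier, which is what makes such methods "model-agnostic."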