Context-adaptive navigation is currently considered one of the potential solutions for achieving more precise and robust positioning. The goal is to adapt the sensor parameters and the navigation filter structure so that they account for context-dependent sensor performance, notably GNSS signal degradation. To this end, reliable context detection is essential. This paper proposes a GNSS-based environmental context detector that classifies the environment surrounding a vehicle into four classes: canyon, open-sky, trees and urban. A support-vector machine classifier is trained on our database collected around Toulouse. We first show the classification results of a model based on GNSS data only, revealing its limited ability to distinguish the trees and urban contexts. To address this issue, this paper proposes a vision-enhanced model that adds satellite visibility information obtained from sky segmentation on fisheye camera images. Compared to the GNSS-only model, the proposed vision-enhanced model significantly improves the classification performance, raising the average F1-score from 78% to 86%.
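
As an illustrative aside, the GNSS-only versus vision-enhanced comparison could be set up along the following lines with scikit-learn; this is a minimal sketch, assuming hypothetical feature descriptors (e.g. C/N0 statistics, satellite counts, a visibility ratio from sky segmentation), synthetic data, and placeholder hyperparameters, not the paper's actual pipeline.

```python
# Sketch: compare an SVM trained on GNSS-only features against one trained on
# GNSS + sky-segmentation visibility features, using macro-averaged F1-score.
# Feature layout and data are illustrative placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

CLASSES = ["canyon", "open-sky", "trees", "urban"]

def evaluate(X, y, label):
    """Train an RBF-kernel SVM and report the macro-averaged F1-score."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0
    )
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_train, y_train)
    score = f1_score(y_test, clf.predict(X_test), average="macro")
    print(f"{label}: macro F1 = {score:.2f}")
    return score

# Hypothetical feature matrices: X_gnss holds per-epoch GNSS descriptors
# (e.g. C/N0 statistics, number of tracked satellites); X_vision appends
# satellite-visibility features derived from fisheye sky segmentation.
rng = np.random.default_rng(0)
n = 400
X_gnss = rng.normal(size=(n, 6))
X_vision = np.hstack([X_gnss, rng.normal(size=(n, 2))])
y = rng.choice(CLASSES, size=n)

evaluate(X_gnss, y, "GNSS-only model")
evaluate(X_vision, y, "Vision-enhanced model")
```

In this sketch the added columns stand in for the vision-derived visibility information; with real labeled data, the per-class and average F1-scores of the two models can be compared directly on the same held-out split.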