Designing a safe decision-making system for end-to-end urban driving remains challenging. Numerous approaches based on Deep Reinforcement Learning (DRL) have been developed; however, they suffer from the cold-start problem and require extensive training to converge. Recent solutions for urban driving combine Hierarchical Reinforcement Learning (HRL) with imitation learning to overcome these limitations. Nevertheless, they do not guarantee safe exploration for an autonomous vehicle. In the literature, rule-based systems have played a pivotal role in ensuring the safety of self-driving cars, but they require manual rule encoding. This paper introduces GHRL, a decision-making framework for vision-based urban driving that combines HRL with a rule-based system for safe urban driving. The HRL algorithm learns the high-level policies, whereas the low-level policies are guided by expert-demonstration rules modeled in the Answer Set Programming (ASP) formalism. When a critical situation occurs, the system shifts to the ASP rules. The state of each policy includes visual features extracted by a convolutional neural network from a monocular camera, localization information, and waypoints. GHRL is evaluated on the Carla NoCrash benchmark. The results show that, by incorporating logical rules, GHRL outperforms state-of-the-art HRL algorithms.