Several approaches have been proposed to remedy the dimensionality problem from which reinforcement learning (RL) suffers. Among them, hierarchical reinforcement learning (HRL) divides an RL problem into sub-problems called options or abstract actions. Discovering abstract actions or options for HRL is challenging, and multiple approaches have been proposed. In this paper, we present a new approach: an agent with a sense of direction for automatic option discovery. The agent uses its sense of direction to discover shortcuts and shortest paths between states it has already visited, and it detects bottlenecks in order to build the termination conditions and initiation states of options. Thus, at the learning step, the agent uses its previous exploration experience in parallel with intrinsically motivated learning. The discovered options are task-independent and can be reused for new tasks. Experimental results on maze problems and the Tic-tac-toe game indicate better results compared with flat RL and another RL approach, in both general and special cases.