Abstract. Institutional websites of governments, public bodies, and many other organizations must publish, and keep up to date, large amounts of information spread across thousands of web pages in order to satisfy the demands of citizens. Nowadays, citizens expect high-quality information from public institutions as a guarantee of their transparency. In this scenario, the search form typically available on websites does not adequately support users, since it requires them to express their information needs explicitly through keywords. The so-called "long tail" phenomenon, typically observed in e-commerce portals, also affects this kind of website: not all pages can be given prominent placement, so users with unpopular requests may spend a long time locating the information they need. Therefore, users need support to navigate more effectively. In this setting, the development of recommender systems that predict the next page a user wants to visit on a large website has gained importance. To address this issue, complex models and approaches for recommending web pages have been proposed, which usually require processing personal user preferences. In this paper, we analyze and compare three different approaches that leverage information embedded in the structure of websites and the logs of their web servers to improve the effectiveness of web page recommendation. Our proposals exploit the context of users' navigation, i.e., their current sessions while browsing a specific website. These approaches require neither storing and processing information about the personal preferences of users nor creating and maintaining complex structures. Thus, they can be easily incorporated into existing large websites to improve the users' navigation experience. Finally, the paper reports comparative experiments on a real-world website to analyze the performance of the proposed approaches.
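To make the general idea of session-based, log-driven recommendation concrete, the following Python sketch shows one simple way such a predictor could work; it is an illustrative assumption, not the paper's actual method, and the session input format (a list of ordered page-ID sequences) is hypothetical. It builds a first-order transition model from sessionized server-log page views and recommends the pages most often visited right after the user's current page.

```python
from collections import defaultdict, Counter

def build_transition_model(sessions):
    """Count page-to-page transitions observed in sessionized server logs.

    `sessions` is a list of page-ID sequences, each giving the ordered
    pages visited during one user session (a hypothetical input format).
    """
    transitions = defaultdict(Counter)
    for session in sessions:
        # Each consecutive pair (p, q) means q was viewed right after p.
        for current_page, next_page in zip(session, session[1:]):
            transitions[current_page][next_page] += 1
    return transitions

def recommend_next(transitions, current_page, k=3):
    """Return the k pages most frequently visited after `current_page`."""
    return [page for page, _ in transitions[current_page].most_common(k)]

# Toy usage: three sessions mined from a web server log.
sessions = [
    ["home", "services", "permits", "permit-form"],
    ["home", "services", "permits"],
    ["home", "news", "services", "contact"],
]
model = build_transition_model(sessions)
print(recommend_next(model, "services"))  # -> ['permits', 'contact']
```

Note that, in line with the abstract's claims, such a model needs no personal user preferences: it only aggregates anonymous transition counts from the server logs and conditions on the current session's position.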