Conversational interfaces have recently become ubiquitous, both in the personal sphere, where they improve individuals' quality of life, and in industrial environments, where they automate services and deliver the corresponding cost savings. However, designing the dialog model these interfaces use to decide the next response is a difficult task for complex conversational interactions. This paper proposes a statistically based dialog manager architecture that provides the flexibility to develop and maintain this module. Our proposal has been integrated using DialogFlow, a natural language understanding platform provided by Google to design conversational user interfaces. The proposed hybrid architecture has been assessed on a real use case in a train scheduling domain, showing that the user experience is highly valued and that the architecture can be integrated into commercial setups.

Conversational Interfaces (CUIs) are systems that simulate interactive conversations with humans [1,2,3]. These systems are meant to display human-like characteristics and to support spontaneous natural language interaction for different purposes, such as performing transactions, answering questions, or chatting.

These interfaces have become a key research subject for many companies, which have understood the potential revenue of bringing these devices into society's mainstream. Virtual Personal Assistants (VPAs), such as Google Now, Apple's Siri, Amazon's Alexa, or Microsoft's Cortana, are the most representative examples. These VPAs are used for a wide variety of tasks, from setting alarms and updating calendars to getting directions, finding the nearest restaurants or stores, or even planning a recipe or reporting the news. CUIs are also used to make services more engaging and to increase customer satisfaction in various applications, such as making appointments and reservations, answering questions, e-government procedures, and support services.

In parallel, machine learning and deep learning methodologies have become part of many current intelligent systems [4,5,6]. In the field of conversational interfaces, these methodologies have traditionally been used to develop automatic speech recognizers [7,8] and, more recently, to implement natural language understanding modules, natural language generators, and dialog management processes [4,9,10].
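As a concrete illustration of how a statistical dialog manager of the kind proposed here could be plugged into DialogFlow, the sketch below shows a fulfillment webhook that forwards the recognized intent and parameters to an external response-selection component and returns its decision to the platform. This is only an assumed arrangement, not the implementation described in this paper: the `select_next_action` function, the `ask_train_schedule` intent, and the Flask server are hypothetical placeholders, while the `queryResult` and `fulfillmentText` fields follow the standard Dialogflow ES webhook request and response format.

```python
# Minimal sketch (not the authors' implementation): a Dialogflow ES fulfillment
# webhook that delegates next-response selection to an external dialog manager.
from flask import Flask, request, jsonify

app = Flask(__name__)


def select_next_action(intent: str, parameters: dict) -> str:
    """Hypothetical statistical dialog manager: maps the current dialog state
    (simplified here to the last intent and its parameters) to a system response."""
    if intent == "ask_train_schedule" and parameters.get("destination"):
        return f"Trains to {parameters['destination']} leave every hour."
    return "Could you tell me your destination and travel date?"


@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json(silent=True) or {}
    query_result = body.get("queryResult", {})
    intent = query_result.get("intent", {}).get("displayName", "")
    parameters = query_result.get("parameters", {})

    # Return the chosen utterance in the field Dialogflow expects.
    return jsonify({"fulfillmentText": select_next_action(intent, parameters)})


if __name__ == "__main__":
    app.run(port=8080)
```

In this arrangement, DialogFlow remains responsible for natural language understanding, while the decision about the next system response is taken outside the platform, which is the separation of concerns the proposed hybrid architecture relies on.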