Conversational search is a relatively new trend in information retrieval. In this study, we developed, implemented, and evaluated a multi-view conversational image search system to investigate user search behaviour, and we explored the potential of reinforcement learning to learn from that behaviour and support users in the complex information-seeking process. A conversational image search system mimics a natural-language dialogue with a user via text or speech, and then helps the user locate the required image through a dialogue-based search. We modified and improved a dual-view search interface that displays the conversation on one side and images on the other. Based on the states, rewards, and dialogues collected in an initial run, we built a reinforcement learning model and a customised search algorithm in the back end that predict which reply and images to present to the user from a restricted set of fixed responses. The system's usability was evaluated with the Chatbot Usability Questionnaire, the System Usability Scale, and the User Experience Questionnaire, and the results were tabulated. The usability experiment showed that most users found the system to be highly usable and helpful for their image search.
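To make the back-end idea concrete, the following is a minimal sketch of how a reinforcement learning model could pick a reply from a restricted set of fixed responses based on dialogue states and rewards. This is an illustrative tabular Q-learning example, not the paper's actual implementation: the response names, state encoding, and hyperparameters are all hypothetical.

```python
import random
from collections import defaultdict

# Hypothetical fixed response set; the actual responses used in the
# study are not specified here.
RESPONSES = ["ask_refinement", "show_similar", "show_category", "ask_colour"]

class ResponseSelector:
    """Tabular Q-learning over (dialogue state, response) pairs."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)  # Q[(state, response)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy: explore occasionally, otherwise pick the
        # best-known reply for this dialogue state.
        if random.random() < self.epsilon:
            return random.choice(RESPONSES)
        return max(RESPONSES, key=lambda r: self.q[(state, r)])

    def update(self, state, response, reward, next_state):
        # Standard Q-learning update from observed user feedback
        # (e.g. a reward of 1.0 when the user clicks a shown image).
        best_next = max(self.q[(next_state, r)] for r in RESPONSES)
        self.q[(state, response)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, response)]
        )

# Example episode: the user reacts positively to "show_similar",
# so that reply becomes preferred for this state.
agent = ResponseSelector(epsilon=0.0)
agent.update("query:cat", "show_similar", 1.0, "clicked")
print(agent.choose("query:cat"))  # → "show_similar"
```

In a deployed system the state would encode the dialogue history and current query, and the reward would come from user actions such as clicks or explicit feedback; the table could also be replaced by a function approximator when the state space grows.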