Recent advances in artificial intelligence (AI) and computer-aided decision support methods have produced efficient new ways of learning about skin problems.1 In particular, advances in machine learning have spurred novel retrieval algorithms and renewed interest in content-based image retrieval (CBIR), in which computer vision methods search large databases for images similar to a "query" image based on image content and visual cues such as color, shape, and pattern.2 In the medical domain, CBIR is designed to assist with finding similar, labeled medical images in a curated database. Within the dermatology context, CBIR can assist with diagnosis or education by surfacing visually similar skin lesion images,3 removing the difficulties that can arise when trying to describe images in words. Because the database and algorithms of these systems are curated for a specific area or problem, users are less likely to encounter irrelevant images, one of the main shortcomings of generic search engines.

Despite the proposed benefits of modern CBIR systems, most CBIR-related research to date has focused on improving the accuracy of AI systems for diagnostic decisions4,5: we know little about the perceived utility and usability of CBIR systems for end users from a human-computer interaction (HCI) perspective.6 In this paper, we describe a pilot study on how an interactive dermoscopic
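To make the retrieval idea above concrete, the following is a minimal sketch of a CBIR pipeline, not the system studied in this paper: each image is reduced to a simple color-histogram descriptor, and the database is ranked by descriptor distance to the query. The `color_histogram` and `retrieve` functions and the toy solid-color images are illustrative assumptions; real systems use far richer learned features and cover texture and shape cues as well.

```python
import numpy as np

def color_histogram(image, bins=8):
    # image: H x W x 3 uint8 RGB array.
    # Concatenate one histogram per channel into a normalized descriptor,
    # a crude stand-in for the "color" cue a CBIR system might index.
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def retrieve(query, database, k=3):
    # Rank database images by Euclidean distance between descriptors
    # and return the indices of the k most similar images.
    q = color_histogram(query)
    dists = [np.linalg.norm(q - color_histogram(img)) for img in database]
    return list(np.argsort(dists)[:k])

def solid(r, g, b):
    # Helper for the toy example: a 16x16 image of one solid color.
    img = np.zeros((16, 16, 3), dtype=np.uint8)
    img[..., 0], img[..., 1], img[..., 2] = r, g, b
    return img

query = solid(200, 30, 30)           # reddish "query" image
database = [solid(30, 200, 30),      # 0: green
            solid(30, 30, 200),      # 1: blue
            solid(210, 30, 30)]      # 2: another reddish image
print(retrieve(query, database, k=1))  # → [2]: the reddish image ranks first
```

A curated medical CBIR system follows the same retrieve-and-rank shape, but the payoff comes from the curation: because every database entry is a labeled image from the target domain, the nearest neighbors returned are clinically relevant rather than merely visually similar.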