The conventional narrative holds that the steadily rising incidence of melanoma among fair-skinned Caucasian populations over recent decades is caused by excessive UV exposure. There is, however, no doubt that other factors have had a significant impact on the rising incidence of melanoma. Before the 1980s, the clinical diagnosis of melanoma was based on gross criteria such as ulceration or bleeding, and melanomas were often diagnosed at advanced stages, when the prognosis was grim. In the mid-1980s, education campaigns such as the propagation of the ABCD criteria, which addressed health care professionals and the public alike, shifted the focus towards early recognition. Dermatoscopy, which became increasingly popular in the mid-1990s, improved diagnostic accuracy for melanoma compared with inspection by the unaided eye, especially for flat and small lesions lacking ABCD criteria. At the same time, pathologists began to lower their thresholds, particularly for the diagnosis of melanoma in situ. The melanoma epidemic that followed was driven mainly by an increase in the number of in situ or microinvasive melanomas. Within a few decades, the landscape shifted from undercalling to overcalling of melanomas, a development that is now met with increasing criticism. The gold standard of melanoma diagnosis is still conventional pathology, which is burdened by low to moderate interobserver agreement. New insights into the molecular landscape of melanoma have not translated into techniques for the reliable diagnosis of gray-zone lesions, including small lesions. The aim of this review is to place our current view of melanoma diagnosis in historical context and to provide a narrative synthesis of its evolution. Based on this narrative, I will offer suggestions on how to rebuild trust in the accuracy of melanoma diagnosis and in the benefit of early recognition.