The manuscript titled "AlphaGo, deep learning, and the future of the human microscopist" in this month's issue of the Archives of Pathology & Laboratory Medicine1 describes the triumph of Google's (Mountain View, California) artificial intelligence (AI) program, AlphaGo, which beat the 18-time world champion of Go, an ancient Chinese board game far more complex than chess. The authors hypothesize that the development of intuition and creativity, combined with the raw computing power of AI, heralds an age in which well-designed and well-executed AI algorithms can solve complex medical problems, including the interpretation of diagnostic images, thereby replacing the microscopist. Of note, in a prior work, the microscope was predicted to have a 75% chance of remaining in use for another 144 years.2

To support their hypothesis, the authors presented recent studies that compared the performance of nontraditional interpreters with that of experienced pathologists in making accurate diagnoses (note: 1 author disclosed a significant financial interest in an AI company). One study examined the potential of using pigeons (yes, pigeons) for medical image studies,3 wherein the pigeons engaged in a matching game of completely benign and unambiguously malignant breast histology images. The pigeons correctly classified images as benign or malignant 85% of the time. A separate image algorithm study was erroneously reported to differentiate between small cell and non-small cell lung carcinoma with the accuracy of expert pulmonary pathologists; instead, multiple computational algorithms were used to subtype known non-small cell lung carcinomas and gliomas in separate experiments.4 The accuracy of each algorithm approached 70% to 85%. We believe that this level of diagnostic accuracy, in settings that lack complexity, is an extremely poor replica of a human pathologist's diagnostic capabilities.

So, will the data-digesting, 24/7-learning AI be capable of looking at an image and rendering a pathologic diagnosis? Before attempting to answer this, we caution against the difficulties of predicting the future. Much of our existence still rests on innovations that have remained unchanged because of their inherent simplicity, applicability, and trueness to purpose (eg, the wheel), proving the point that something new (and different) is not always something better. On the other hand, several established and incumbent technologies were quickly (albeit incompletely) eclipsed, often within a decade, by a challenger that was faster, more convenient, cheaper, or better suited to the need (eg, postal mail being replaced by electronic mail).
In the latter context, we note that information technology and AI are clearly better than humans at repetitive, detailed tasks that require accuracy and speed; humans often find such tasks mind-numbing and are consequently error-prone.