Although rapid advances in machine learning have made it increasingly applicable to expert decision-making, the delivery of accurate algorithmic predictions alone is insufficient for effective human-AI collaboration. In this work, we investigate the key types of information medical experts desire when they are first introduced to a diagnostic AI assistant. In a qualitative lab study, we interviewed 21 pathologists before, during, and after they were presented with deep neural network (DNN) predictions for prostate cancer diagnosis, to learn the types of information that they desired about the AI assistant. Our findings reveal that, far beyond understanding the local, case-specific reasoning behind any model decision, clinicians desired upfront information about basic, global properties of the model, such as its known strengths and limitations, its subjective point-of-view, and its overall design objective (what it is designed to be optimized for). Participants compared these information needs to the collaborative mental models they develop of their medical colleagues when seeking a second opinion: the medical perspectives and standards that those colleagues embody, and the compatibility of those perspectives with their own diagnostic patterns. These findings broaden and enrich discussions surrounding AI transparency for collaborative decision-making, providing a richer understanding of what experts find important in their introduction to AI assistants before integrating them into routine practice. CCS Concepts: • Human-centered computing → Human computer interaction (HCI).
Machine learning (ML) is increasingly being used in image retrieval systems for medical decision making. One application of ML is to retrieve visually similar medical images from past patients (e.g. tissue from biopsies) to reference when making a medical decision with a new patient. However, no algorithm can perfectly capture an expert's ideal notion of similarity for every case: an image that is algorithmically determined to be similar may not be medically relevant to a doctor's specific diagnostic needs. In this paper, we identified the needs of pathologists when searching for similar images retrieved using a deep learning algorithm, and developed tools that empower users to cope with the search algorithm on-the-fly, communicating what types of similarity are most important at different moments in time. In two evaluations with pathologists, we found that these refinement tools increased the diagnostic utility of images found and increased user trust in the algorithm. The tools were preferred over a traditional interface, without a loss in diagnostic accuracy. We also observed that users adopted new strategies when using refinement tools, re-purposing them to test and understand the underlying algorithm and to disambiguate ML errors from their own errors. Taken together, these findings inform future human-ML collaborative systems for expert decision-making. CCS Concepts: • Human-centered computing → Human computer interaction (HCI). Keywords: Human-AI interaction; machine learning; clinical health.
Figure 1: Medical images contain a wide range of clinical features, such as cellular (1) and glandular morphology (2), interaction between components (3), processing artifacts (4), and many more. It can be difficult for a similar-image search algorithm to perfectly capture an expert's notion of similarity.
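As a rough illustration of the on-the-fly refinement idea described above (and not the paper's implementation), the sketch below shows one way a refinement could act on precomputed patch embeddings: distances to the query are re-weighted by a hypothetical "concept direction" chosen by the user. The function names, the re-weighting scheme, and the stand-in embeddings are all assumptions made for illustration.

```python
# Minimal sketch, assuming each image patch already has an embedding vector.
# `concept_direction` is a hypothetical direction in embedding space (e.g.,
# "glandular morphology") used to bias the ranking toward that concept.
import numpy as np

def search(query_emb, corpus_embs, k=5, concept_direction=None, weight=0.0):
    """Return indices of the k nearest patches, optionally re-weighted
    toward a user-selected concept direction."""
    dists = np.linalg.norm(corpus_embs - query_emb, axis=1)
    if concept_direction is not None and weight > 0.0:
        # A larger projection onto the concept direction improves the rank.
        concept_score = corpus_embs @ concept_direction
        dists = dists - weight * concept_score
    return np.argsort(dists)[:k]

# Usage with random stand-in embeddings.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 128))
query = rng.normal(size=128)
concept = rng.normal(size=128)
print(search(query, corpus, k=5, concept_direction=concept, weight=0.5))
```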
Saliency methods can aid understanding of deep neural networks. Recent years have witnessed many improvements to saliency methods, as well as new ways for evaluating them. In this paper, we 1) present a novel region-based attribution method, XRAI, that builds upon integrated gradients [26], 2) introduce evaluation methods for empirically assessing the quality of image-based saliency maps (Performance Information Curves (PICs)), and 3) contribute an axiom-based sanity check for attribution methods. Through empirical experiments and example results, we show that XRAI produces better results than other saliency methods for common models and the ImageNet dataset.
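Per the abstract, XRAI combines integrated gradients with region-level aggregation of attributions. The sketch below is a minimal, generic illustration of those two ingredients, not the reference XRAI code: the toy `grad_fn`, the midpoint Riemann-sum approximation, and the segment ranking are simplifications chosen for brevity.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=64):
    """Riemann-sum (midpoint) approximation of integrated gradients.
    grad_fn(z) must return dF/dz for the output of interest."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

def rank_regions(attributions, segments):
    """Aggregate per-pixel attributions over precomputed segments and
    return segment ids ordered by total attribution (region-based ranking)."""
    ids = np.unique(segments)
    totals = [attributions[segments == s].sum() for s in ids]
    return ids[np.argsort(totals)[::-1]]

# Toy check with F(x) = sum(x**2), so dF/dx = 2x and IG_i should be ~x_i**2.
x = np.array([1.0, 2.0, 3.0])
attr = integrated_gradients(lambda z: 2.0 * z, x, np.zeros_like(x))
print(attr)                                   # ~[1., 4., 9.]
print(rank_regions(attr, np.array([0, 0, 1])))  # segment 1 outranks segment 0
```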
Manipulated images lose believability if the user's edits fail to account for shadows. We propose a method that makes removal and editing of soft shadows easy. Soft shadows are ubiquitous, but remain notoriously difficult to extract and manipulate. We posit that soft shadows can be segmented, and therefore edited, by learning a mapping function for image patches that generates shadow mattes. We validate this premise by removing soft shadows from photographs with only a small amount of user input. Given only broad user brush strokes that indicate the region to be processed, our new supervised regression algorithm automatically unshadows an image, removing the umbra and penumbra. The resulting lit image is frequently perceived as a believable shadow-free version of the scene. We tested the approach on a large set of soft shadow images, and performed a user study that compared our method to the state of the art and to real lit scenes. Our results are more difficult to identify as being altered, and are perceived as preferable compared to prior work.
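For intuition only: assuming the common multiplicative shadow-formation model (shadowed = matte × lit, with the matte in (0, 1]), applying a predicted shadow matte to recover a lit image reduces to a clipped division. The paper's actual contribution, the supervised regression that predicts the matte from user-indicated patches, is not shown here; the formation model and names below are assumptions.

```python
import numpy as np

def unshadow(image, matte, eps=1e-3):
    """Recover an approximately lit image under a multiplicative shadow
    model: shadowed = matte * lit, with matte values in (0, 1]."""
    matte = np.clip(matte, eps, 1.0)
    return np.clip(image / matte, 0.0, 1.0)

# Synthetic round-trip check: shadow a flat gray image, then unshadow it.
lit = np.ones((4, 4, 3)) * 0.8
matte = np.ones((4, 4, 1)) * 0.5   # darker inside the shadowed region
shadowed = lit * matte
print(np.allclose(unshadow(shadowed, matte), lit))  # True
```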
The increasing availability of large institutional and public histopathology image datasets is enabling the searching of these datasets for diagnosis, research, and education. Although these datasets typically have associated metadata such as diagnosis or clinical notes, even carefully curated datasets rarely contain annotations of the location of regions of interest on each image. As pathology images are extremely large (up to 100,000 pixels in each dimension), further laborious visual search of each image may be needed to find the feature of interest. In this paper, we introduce a deep-learning-based reverse image search tool for histopathology images: Similar Medical Images Like Yours (SMILY). We assessed SMILY’s ability to retrieve search results in two ways: using pathologist-provided annotations, and via prospective studies where pathologists evaluated the quality of SMILY search results. As a negative control in the second evaluation, pathologists were blinded to whether search results were retrieved by SMILY or randomly. In both types of assessments, SMILY was able to retrieve search results with similar histologic features, organ site, and prostate cancer Gleason grade compared with the original query. SMILY may be a useful general-purpose tool in the pathologist’s arsenal, to improve the efficiency of searching large archives of histopathology images, without the need to develop and implement specific tools for each application.
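As a minimal sketch of the reverse-image-search idea (not SMILY's implementation), the code below tiles a large slide into fixed-size patches, assumes each tile has already been embedded by some CNN (the embedding step is not shown), and retrieves the nearest tiles to a query by cosine similarity. Function names, tile sizes, and the stand-in embeddings are illustrative assumptions.

```python
import numpy as np

def tile_slide(slide, tile=256, stride=256):
    """Cut a slide array of shape (H, W, 3) into fixed-size tiles and record
    their top-left coordinates, so matches can be mapped back to the slide."""
    H, W = slide.shape[:2]
    tiles, coords = [], []
    for y in range(0, H - tile + 1, stride):
        for x in range(0, W - tile + 1, stride):
            tiles.append(slide[y:y + tile, x:x + tile])
            coords.append((y, x))
    return np.stack(tiles), coords

def nearest_tiles(query_emb, tile_embs, k=5):
    """Cosine-similarity nearest neighbors over precomputed tile embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    t = tile_embs / np.linalg.norm(tile_embs, axis=1, keepdims=True)
    return np.argsort(t @ q)[::-1][:k]

# Usage with random stand-ins (a real system would embed tiles with a CNN).
rng = np.random.default_rng(0)
tiles, coords = tile_slide(rng.random((512, 512, 3)))
tile_embs = rng.normal(size=(len(coords), 128))
query_emb = rng.normal(size=128)
print([coords[i] for i in nearest_tiles(query_emb, tile_embs, k=3)])
```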