Aims: Evaluation of human epidermal growth factor receptor 2 (HER2) expression by visual examination of immunohistochemistry (IHC) on invasive breast cancer (BCa) is a key part of the diagnostic assessment of BCa, owing to its recognized importance as a predictive and prognostic marker in clinical practice. However, visual scoring of HER2 is subjective and consequently prone to interobserver variability. Given the prognostic and therapeutic implications of HER2 scoring, a more objective method is required. In this paper, we report on a recent automated HER2 scoring contest, held in conjunction with the annual PathSoc meeting in Nottingham in June 2016, aimed at systematically comparing and advancing the state-of-the-art artificial intelligence (AI)-based automated methods for HER2 scoring. Methods and results: The contest data set comprised digitized whole slide images (WSI) of sections from 86 cases of invasive breast carcinoma stained with both haematoxylin and eosin (H&E) and IHC for HER2. The competing algorithms automatically predicted scores of the IHC slides for an unseen subset of the data set, and the predicted scores were compared with the 'ground truth' (a consensus score from at least two experts). We also report on a simple 'Man versus Machine' contest for the scoring of HER2, and show that the automated methods could beat the pathology experts on this contest data set. Conclusions: This paper presents a benchmark for comparing the performance of automated algorithms for scoring of HER2. It also demonstrates the enormous potential of automated algorithms in assisting the pathologist with objective IHC scoring.
Address for correspondence: N Rajpoot and T Qaiser, Department of Computer Science, University of Warwick, UK. e-mails: n.m.rajpoot@warwick.ac.uk; t.qaiser@warwick.ac.uk. *These authors contributed equally to this study. 2018, 72, 227-238. DOI: 10.1111
Deep learning with artificial neural networks is an emerging tool in image analysis. We demonstrate its potential in the field of digital holographic microscopy by addressing the challenging problem of determining the in-focus reconstruction depth of Madin-Darby canine kidney cell clusters encoded in digital holograms. A deep convolutional neural network learns the in-focus depths from half a million hologram amplitude images. The trained network correctly determines the in-focus depth of new holograms with high probability, without performing numerical propagation. This paper reports on extensions to preliminary work published earlier as one of the first applications of deep learning in the field of digital holographic microscopy.
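The approach described above can be illustrated with a toy, forward-only sketch: a small convolutional network maps a hologram amplitude image to a probability distribution over discretized reconstruction depths. The layer shapes, sizes and random weights below are illustrative assumptions only, not the authors' trained network (which was learned from half a million labelled amplitude images).

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """Valid-mode cross-correlation of a single-channel image with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((len(kernels), h, w))
    for c, k in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[c, i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def depth_classifier(amplitude, n_depths=10):
    """Toy forward pass: hologram amplitude image -> probabilities over depth classes.
    Weights are random placeholders; in practice they are learned from labelled holograms."""
    feat = np.maximum(0.0, conv2d(amplitude, rng.normal(size=(4, 3, 3))))  # conv + ReLU
    pooled = feat[:, ::2, ::2]                                             # stride-2 subsampling
    flat = pooled.ravel()
    logits = rng.normal(size=(n_depths, flat.size)) @ flat                 # dense layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                                                     # softmax
```

The key design point conveyed by the abstract is that, once trained, such a network replaces the entire propagate-and-score search with a single forward pass; the predicted depth is simply the class with the highest probability.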
Autofocusing of digital holograms of microscopic objects is a challenging problem. In this paper, an application of deep learning to autofocusing is described, and its generalisation performance is analysed.
In digital holographic microscopy, one often obtains an in-focus image of the sample by applying a focus metric to a stack of numerical reconstructions. We present an alternative approach using a deep convolutional neural network.
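The conventional stack-based approach that this work replaces can be sketched as follows. The angular spectrum propagator and the amplitude-variance focus metric below are common illustrative choices, not necessarily the exact ones used in the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Propagate a square complex field by distance z using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Clip the argument at zero so evanescent components do not produce NaNs.
    arg = np.maximum(0.0, 1.0 / wavelength**2 - FX**2 - FY**2)
    H = np.exp(2j * np.pi * z * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def stack_autofocus(hologram, depths, wavelength, dx, metric=np.var):
    """Reconstruct the hologram at each candidate depth and return the depth
    whose amplitude reconstruction maximizes the focus metric."""
    scores = [metric(np.abs(angular_spectrum_propagate(hologram, z, wavelength, dx)))
              for z in depths]
    return depths[int(np.argmax(scores))]
```

The cost of this search grows linearly with the number of candidate depths, since every depth requires two FFTs; this is precisely the per-hologram work that a trained network avoids.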
Digital holographic microscopy enables the capture of large three-dimensional volumes. Instead of using a laser as an illumination source, partially coherent alternatives such as light-emitting diodes can be used, which produce holograms free of parasitic reflections and speckle. The captured high-contrast holograms are suitable for the characterization of micrometer-sized particles. As the reconstructed phase is not usable in the case of multiple overlapping objects, depth extraction can be conducted on the reconstructed intensity. This work introduces a novel depth extraction algorithm that takes into consideration the possible locations of multiple objects at various depths in the imaged volume. A focus metric, the Tamura coefficient, is applied to each pixel in the reconstructed amplitude throughout the volume. This work also introduces an optimized version of the algorithm, which runs in two stages. During the first stage, coarse positions of the objects are extracted by applying the Tamura coefficient to nonoverlapping window blocks of intensity reconstructions. The second stage produces high-precision characterizations of the objects by calculating the Tamura coefficient with overlapping window blocks around the axial positions extracted in the first stage. Experimental results with real-world microscopic objects show the effectiveness of the proposed method.
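A simplified sketch of the two-stage idea, assuming a stack of amplitude reconstructions has already been computed: stage one scores nonoverlapping blocks with the Tamura coefficient to get a coarse depth per block, and stage two refines each pixel with an overlapping sliding window, searching only near the coarse depth. The block size, window size and axial search radius are illustrative assumptions.

```python
import numpy as np

def tamura(a):
    """Tamura coefficient of an amplitude patch: sqrt(std / mean)."""
    m = a.mean()
    return np.sqrt(a.std() / m) if m > 0 else 0.0

def coarse_depths(stack, depths, block=16):
    """Stage 1: depth of maximum Tamura coefficient per nonoverlapping block.
    `stack` has shape (n_depths, height, width)."""
    nz, h, w = stack.shape
    out = {}
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            scores = [tamura(stack[k, i:i + block, j:j + block]) for k in range(nz)]
            out[(i, j)] = depths[int(np.argmax(scores))]
    return out

def refine_depths(stack, depths, coarse, block=16, win=8, r=2):
    """Stage 2: per-pixel Tamura coefficient with overlapping windows,
    searched only within +/- r depth slices of the enclosing block's coarse depth."""
    nz, h, w = stack.shape
    depth_map = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            bi, bj = (y // block) * block, (x // block) * block
            k0 = int(np.argmin(np.abs(depths - coarse.get((bi, bj), depths[0]))))
            ks = list(range(max(0, k0 - r), min(nz, k0 + r + 1)))
            y0, y1 = max(0, y - win // 2), min(h, y + win // 2 + 1)
            x0, x1 = max(0, x - win // 2), min(w, x + win // 2 + 1)
            scores = [tamura(stack[k, y0:y1, x0:x1]) for k in ks]
            depth_map[y, x] = depths[ks[int(np.argmax(scores))]]
    return depth_map
```

The restriction of stage two to a small axial neighbourhood is what makes the optimized version cheaper: the expensive per-pixel overlapping-window scoring is evaluated on a handful of slices rather than on the full depth stack.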