Key Points
Question: Can molecular markers of cancer be extracted from tissue morphology as seen in hematoxylin-eosin–stained images?
Findings: In this diagnostic study of tissue microarray hematoxylin-eosin–stained images from 5356 patients with breast cancer, molecular biomarker expression was found to be significantly associated with tissue histomorphology. A deep learning model was able to predict estrogen receptor expression solely from hematoxylin-eosin–stained images with accuracy noninferior to that of standard immunohistochemistry.
Meaning: These results suggest that deep learning models may assist pathologists in the molecular profiling of cancer at practically no added cost or time.
Figure 1: Qualitative examples on FAUST models (left), SHREC'16 (middle), and SCAPE (right). In the SHREC experiment, the green parts mark where no correspondence was found; notice how those areas are close to the parts that are hidden in the other model. The missing matches (marked in black) in the SCAPE experiment are an artifact of the multiscale approach.

Abstract
We present a method to match three-dimensional shapes under non-isometric deformations, topology changes, and partiality. We formulate the problem as matching between sets of pair-wise and point-wise descriptors, impose a continuity prior on the mapping, and propose a projected-descent optimization procedure inspired by difference-of-convex-functions (DC) programming.
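The abstract names the optimization scheme but gives no details; below is a minimal, self-contained sketch of one way a DC-inspired projected descent could look, assuming a generic objective f(x) = g(x) − h(x) with g and h convex and, purely for illustration, the probability simplex as the feasible set (standing in for the paper's matching constraints). All names here (`dc_projected_descent`, `project_to_simplex`, the toy g and h) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    j = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / j > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def dc_projected_descent(grad_g, grad_h, x0, steps=200, lr=0.05):
    """Minimize f(x) = g(x) - h(x) over the simplex.
    Each iteration linearizes the concave part -h at the current iterate
    (the DC step) and takes a projected gradient step on the surrogate."""
    x = x0.copy()
    for _ in range(steps):
        d = grad_g(x) - grad_h(x)           # gradient of the convex surrogate
        x = project_to_simplex(x - lr * d)  # keep the iterate feasible
    return x

# Toy instance: f(x) = ||Ax - b||^2 - 0.5 ||x||^2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
x = dc_projected_descent(lambda x: 2 * A.T @ (A @ x - b),
                         lambda x: x,  # gradient of the concave part 0.5||x||^2
                         np.ones(5) / 5)
```

The point of the DC split is that the surrogate built at each iterate is convex, so an ordinary projected gradient step on it is well behaved even though f itself is not convex.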
In this paper we propose DeROT, a method for in-plane derotation of depth images using a deep convolutional neural network. The method aims to normalize out the effects of rotation on the highly articulated motion of deforming geometric surfaces such as hands. To support our approach, we also describe a new pipeline for building a very large training database using high-accuracy magnetic annotation and labeling of objects imaged by a depth camera. The proposed method reduces the complexity of learning in the space of articulated poses, which we demonstrate by applying two different state-of-the-art learning-based hand pose estimation methods to fingertip detection. Significant classification improvements are shown over the baseline implementation. Our framework involves no tracking, kinematic constraints, or explicit prior model of the articulated object.

DeROT: removing in-plane rotation. Changing the global rotation of an object directly increases the variation in appearance of its parts. In markerless settings, removing this variability through partial canonization can significantly shrink the space of possible images used for pose learning, instead of trying to learn the rotational variability explicitly through data augmentation. We therefore remove the variability as a preprocessing step both during training and at run-time. To this end, we propose to learn the rotation with a deep convolutional neural network (CNN) in a regression context, based on a network similar to that of [4]. We show how this can be used to predict full three-degrees-of-freedom (3 DOF) orientation information by training on a large database of hand images captured by a depth sensor. This is combined with a useful insight we call the "rule of thumb": there is almost always an in-plane rotation that can be applied to an image of the hand which forces the base of the thumb to be on the right side of the image. Synthetic and real examples of applying DeROT to images of a hand can be seen in Figure 1.

Fingertip detection. In this work we focus specifically on per-frame fingertip detection in depth images, without either tracking or kinematic modeling. We propose useful modifications to the popular machine-learning-based methods of Keskin et al. [3] and Tompson et al. [4]. Our preprocessing step crops input images of hands and rotates them about their center of mass using the angle of derotation predicted by DeROT (a sketch of this step appears below).

To calibrate between the camera and sensor frames, we position the magnetic sensors on the corners of a checkerboard pattern to create physical correspondences between the detected corner locations and the actual sensors. The setup can be seen in Figure 2. Sensors are modeled as 3D oriented ellipsoids and ray-cast into the camera frame. Discrete fingertip labels, as well as heat-maps and orientation information, are then trivially associated with each input image. The database is created from 10 participants in total who perform random hand motions with extensive pose ...
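Since the paper's exact cropping and normalization details are not reproduced above, the following is only a minimal sketch of the derotation preprocessing step: rotating a cropped depth image about its center of mass by an angle a regressor has already predicted. The `derotate` helper and its conventions are hypothetical, not the authors' code.

```python
import numpy as np
from scipy import ndimage

def derotate(depth_crop, angle_deg):
    """Rotate a cropped depth image about its center of mass by the
    predicted in-plane angle (hypothetical helper, not the authors' code).
    Nonzero pixels are treated as valid depth for the center-of-mass mask;
    nearest-neighbor interpolation (order=0) avoids inventing depth values."""
    mask = depth_crop > 0
    cy, cx = ndimage.center_of_mass(mask)
    h, w = depth_crop.shape
    # move the center of mass to the image center, then rotate in place
    centered = ndimage.shift(depth_crop,
                             ((h - 1) / 2 - cy, (w - 1) / 2 - cx), order=0)
    return ndimage.rotate(centered, angle_deg, reshape=False, order=0)
```

At training time the same transform would be applied with ground-truth angles, so the pose learner only ever sees canonized images.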
In the past several decades, many attempts have been made to model realistic synthetic geometric data. The goal of such models is to generate plausible 3D geometries and textures. Perhaps the best known of its kind is the linear 3D morphable model (3DMM) for faces. Such models can be found at the core of many computer vision applications, such as face reconstruction, recognition, and authentication, to name just a few. Generative adversarial networks (GANs) have shown great promise in imitating high-dimensional data distributions. State-of-the-art GANs are capable of tasks such as image-to-image translation as well as auditory and image signal synthesis, producing novel plausible samples from the data distribution at hand. Geometric data is generally more difficult to process due to the inherent lack of an intrinsic parametrization. By bringing geometric data into an aligned space, we are able to map the data onto a 2D plane using a universal parametrization. This alignment allows digitally scanned geometric data to be processed efficiently with image processing tools. Using this methodology, we propose a novel face synthesis model that generates realistic facial textures together with their corresponding geometries. A GAN is employed to imitate the space of parametrized human textures, while corresponding facial geometries are generated by learning the best 3DMM coefficients for each texture. The generated textures are then mapped back onto the corresponding geometries to obtain new high-resolution 3D faces.
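To make the geometry-generation step concrete, here is a minimal sketch of the linear 3DMM evaluation the abstract builds on: a face shape is the mean shape plus a linear combination of principal shape components, with the coefficients being the per-texture values the model learns. The function name and array shapes are illustrative assumptions, not the paper's API.

```python
import numpy as np

def shape_from_3dmm(mean_shape, shape_basis, coeffs):
    """Linear 3D morphable model (names/shapes are illustrative assumptions).
    mean_shape:  (3N,) flattened vertex coordinates of the average face
    shape_basis: (3N, K) principal shape components
    coeffs:      (K,) coefficients, e.g. those regressed per generated texture
    Returns an (N, 3) array of vertex positions."""
    return (mean_shape + shape_basis @ coeffs).reshape(-1, 3)
```

Because the model is linear in the coefficients, regressing the best coefficients per texture is a well-posed least-squares-style problem, which is what makes pairing GAN-generated textures with plausible geometries tractable.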