During embryogenesis, cells repeatedly divide and dynamically change their positions in three-dimensional (3D) space. A robust and accurate algorithm for acquiring the 3D positions of cells would help reveal the mechanisms of embryogenesis. To extract quantitative criteria of embryogenesis from time-series 3D microscopy images, image-processing algorithms such as segmentation have been applied. Because cells in embryos are highly crowded, an algorithm that can accurately segment individual cells is needed. To quantify the nuclear region of every cell from time-series 3D fluorescence microscopy images of living cells, we developed QCANet, a convolutional neural network-based instance segmentation algorithm for 3D fluorescence bioimages. We demonstrated that QCANet outperformed 3D Mask R-CNN, which is currently considered the best instance segmentation algorithm. We showed that QCANet can be applied not only to developing mouse embryos but also to developing embryos of two other model species. Using QCANet, we extracted several quantitative criteria of embryogenesis from 11 early mouse embryos and showed that these criteria can be used to evaluate differences between individual embryos. This study contributes to the development of fundamental approaches for assessing embryogenesis on the basis of extracted quantitative criteria.
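The "quantitative criteria" mentioned above (e.g., cell count and nuclear positions over time) follow directly from an instance-segmented volume. Below is a minimal sketch of that post-segmentation step, assuming a 3D array of integer instance labels (0 = background); it is an illustration, not QCANet's actual pipeline.

```python
# Minimal sketch: extracting quantitative criteria (cell count, 3D centroids,
# voxel volumes) from an instance-segmented volume, where each voxel holds an
# integer cell label and 0 marks background. Illustrative only; this is not
# QCANet's implementation.
from collections import defaultdict

def quantify_nuclei(labeled_volume):
    """labeled_volume: nested lists [z][y][x] of integer instance labels."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])  # label -> summed coordinates
    counts = defaultdict(int)                    # label -> voxel count (volume)
    for z, plane in enumerate(labeled_volume):
        for y, row in enumerate(plane):
            for x, label in enumerate(row):
                if label == 0:
                    continue
                s = sums[label]
                s[0] += z; s[1] += y; s[2] += x
                counts[label] += 1
    centroids = {lab: tuple(c / counts[lab] for c in sums[lab]) for lab in counts}
    return {"cell_count": len(counts), "centroids": centroids, "volumes": dict(counts)}

# Example: two labeled "nuclei" in a tiny 1x2x3 volume
vol = [[[1, 1, 0],
        [0, 2, 2]]]
print(quantify_nuclei(vol))
```

Tracking these per-cell centroids across time points yields the kind of per-embryo trajectories the study compares across individuals.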
With recent advances in supervised machine learning for medical image analysis, annotated medical image datasets from various domains are being shared extensively. Given that annotation requires medical expertise, such labels should be reused across as many learning tasks as possible. However, the multi-modal nature of each annotated image makes it difficult to share annotation labels among diverse tasks. In this work, we present an inductive transfer learning (ITL) approach that adapts the annotation labels of source-domain datasets to tasks on target-domain datasets using Cycle-GAN-based unsupervised domain adaptation (UDA). To evaluate the applicability of the ITL approach, we transferred the brain tissue annotation labels of a source-domain Magnetic Resonance Imaging (MRI) dataset to the task of brain tumor segmentation on a target-domain MRI dataset. The results confirm that the accuracy of brain tumor segmentation improved significantly. The proposed ITL approach can make a significant contribution to the field of medical image analysis as a fundamental tool for improving and promoting various tasks that use medical images.
[Graphical abstract: Step 1: Brain Tissue Segmentation; Step 2: Cycle-GAN-based Unsupervised Domain Adaptation; Step 3: Brain Tumor Segmentation with induced Brain Tissue Label.]
In the UDA step, the generator G_{t→s} generates a fake source-domain image x^{t→s} = G_{t→s}(x^t); since the discriminator D_s aims to distinguish the generated fake image x^{t→s} from the real image x^s, the generator and the discriminator are adversarially trained. The adversarial loss in this relationship is defined as: L_adv(G_{t→s}, D_s) = E_{x^s}[log D_s(x^s)] + E_{x^t}[log(1 − D_s(G_{t→s}(x^t)))].
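The adversarial objective of Cycle-GAN-style domain adaptation can be illustrated numerically. The sketch below computes the standard GAN objective E[log D(x^s)] + E[log(1 − D(G(x^t)))] from discriminator scores; the scores are toy stand-ins, not outputs of the paper's networks.

```python
# Minimal sketch of the GAN adversarial objective used in Cycle-GAN-style
# unsupervised domain adaptation: a generator G_{t->s} maps target-domain
# images to fake source-domain images, and a discriminator D_s scores real
# vs. fake in (0, 1). Scores here are illustrative numbers only.
import math

def adversarial_loss(d_real_scores, d_fake_scores, eps=1e-12):
    """Objective: E[log D_s(x_s)] + E[log(1 - D_s(G_{t->s}(x_t)))]."""
    real_term = sum(math.log(max(p, eps)) for p in d_real_scores) / len(d_real_scores)
    fake_term = sum(math.log(max(1 - p, eps)) for p in d_fake_scores) / len(d_fake_scores)
    return real_term + fake_term

# A confident discriminator (real -> ~1, fake -> ~0) keeps the objective
# near 0 (its maximum); a fooled one drives it strongly negative.
confident = adversarial_loss([0.99, 0.98], [0.02, 0.01])
fooled = adversarial_loss([0.5, 0.5], [0.5, 0.5])
print(confident, fooled)
```

The discriminator is trained to maximize this quantity while the generator is trained to minimize it, which is what "adversarially trained" means in the abstract.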
Image-based deep learning systems, such as convolutional neural networks (CNNs), have recently been applied to cell classification, producing impressive results; however, application of CNNs has been confined to classification of the current cell state from the image. Here, we focused on dynamic cell movement where current and/or past cell shape can influence the future cell fate. We demonstrate that CNNs prospectively predicted the future direction of cell movement with high accuracy from a single image patch of a cell at a certain time. Furthermore, by visualizing the image features that were learned by the CNNs and contributed to their prediction, we could identify morphological features, e.g., the protrusions and trailing edge that have been experimentally reported to determine the direction of cell movement. Our results indicate that CNNs have the potential to predict the future cell fate from current cell shape, and can be used to automatically identify those morphological features that influence future cell fate.
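Framing movement prediction as classification requires turning each cell's observed future displacement into a discrete direction label for training. The sketch below bins a displacement vector into four direction classes; the four-class scheme and names are illustrative assumptions, not the paper's exact label design.

```python
# Minimal sketch of constructing the CNN's prediction target: a cell's
# future displacement vector (dx, dy) is binned into a discrete direction
# class. The four-class binning is illustrative, not the paper's scheme.
import math

DIRECTIONS = ["right", "up", "left", "down"]

def direction_label(dx, dy):
    """Map a future displacement (dx, dy) to the nearest 90-degree sector."""
    angle = math.atan2(dy, dx)                 # angle in (-pi, pi]
    sector = round(angle / (math.pi / 2)) % 4  # nearest quarter-turn
    return DIRECTIONS[sector]

# A CNN classifier trained on (image patch, direction_label) pairs can then
# predict the future direction class from a single patch.
print(direction_label(5.0, 0.5))   # mostly rightward displacement
print(direction_label(-1.0, 0.2))  # mostly leftward displacement
```

With labels constructed this way, visualizing which patch regions drive each class prediction is what surfaces the protrusion and trailing-edge features described in the abstract.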