Motivation: Single-cell time-lapse microscopy is a ubiquitous tool for studying the dynamics of complex cellular processes. While imaging can be automated to generate very large volumes of data, processing the resulting movies to extract high-quality single-cell information remains challenging. The development of software tools that automatically identify and track cells is essential for realizing the full potential of time-lapse microscopy data. Convolutional neural networks (CNNs) are ideally suited for such applications, but require large amounts of manually annotated data for training, a time-consuming and tedious process.

Results: We developed a new approach to CNN training for yeast cell segmentation based on synthetic data and present (i) a software tool for generating synthetic images that mimic brightfield images of budding yeast cells and (ii) a convolutional neural network (Mask R-CNN) for yeast segmentation that was trained on a fully synthetic dataset. The Mask R-CNN performed excellently on segmenting actual microscopy images of budding yeast cells, and a density-based clustering algorithm (DBSCAN) was able to track the detected cells across the frames of microscopy movies. Our synthetic data creation tool completely bypasses the laborious generation of manually annotated training datasets and can be easily adjusted to produce images with many different features. The incorporation of synthetic data creation into the development pipeline of CNN-based tools for budding yeast microscopy is a critical step towards the generation of more powerful, widely applicable and user-friendly image-processing tools for this microorganism.

Availability: The synthetic data generation code can be found at https://github.com/prhbrt/synthetic-yeast-cells. The Mask R-CNN, as well as the tuning and benchmarking scripts, can be found at https://github.com/ymzayek/yeastcells-detection-maskrcnn. We also provide Google Colab scripts that reproduce all the results of this work.

Supplementary information: Supplementary material is available at Bioinformatics online.
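As a rough illustration of the tracking step described above, the sketch below clusters per-frame detections in (x, y, scaled time) space with scikit-learn's DBSCAN, so that each cluster corresponds to one putative cell track. The detection format, the time_scale weighting and the eps/min_samples values are illustrative assumptions, not parameters taken from the authors' repositories.

```python
# Minimal sketch of DBSCAN-based tracking of per-frame detections.
# Assumes `detections` is a list of (frame_index, x, y) centroids taken
# from Mask R-CNN output; the scaling of the time axis and the eps value
# are illustrative choices, not the authors' exact parameters.
import numpy as np
from sklearn.cluster import DBSCAN

def track_cells(detections, time_scale=5.0, eps=10.0, min_samples=2):
    """Cluster detections in (x, y, scaled time) space; each cluster
    label corresponds to one putative cell track."""
    pts = np.asarray([(x, y, t * time_scale) for t, x, y in detections])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    return labels  # -1 marks detections left unassigned (noise)

# Example: three frames of two slowly moving cells.
dets = [(0, 10.0, 10.0), (0, 40.0, 42.0),
        (1, 11.0, 10.5), (1, 41.0, 41.0),
        (2, 12.0, 11.0), (2, 42.0, 40.0)]
print(track_cells(dets))  # e.g. [0 1 0 1 0 1]
```

Scaling the frame index before clustering lets a single distance threshold trade off spatial displacement against temporal gaps; in practice these parameters would need tuning to the frame rate and cell motility of the movies at hand.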
The structural similarity index metric (SSIM) is used to measure the similarity between two images. The aim here was to study the feasibility of this metric for measuring the structural similarity and fracture characteristics of midfacial fractures in computed tomography (CT) datasets following radiation dose reduction, iterative reconstruction (IR) and deep learning reconstruction. Zygomaticomaxillary fractures were inflicted on four human cadaver specimens and scanned with standard and low-dose CT protocols. Datasets were reconstructed using varying strengths of IR, subsequently applying the PixelShine™ deep learning algorithm as post-processing. Individual small and non-dislocated fractures were selected for the data analysis. After attenuating the osseous anatomy of interest, registration was performed to superimpose the datasets and subsequently to measure structural image quality. Changes to the fracture characteristics were measured by comparing each fracture to the mirrored contralateral anatomy. Twelve fracture locations were included in the data analysis. The largest changes in structural image quality occurred with radiation dose reduction (0.980036 ± 0.011904), whilst the effects of IR strength (0.995399 ± 0.001059) and the deep learning algorithm (0.999996 ± 0.000002) were small. Radiation dose reduction and IR strength tended to affect the fracture characteristics. Neither the structural image quality nor the fracture characteristics were affected by the use of the deep learning algorithm. In conclusion, evidence is provided for the feasibility of using the structural similarity index metric for the analysis of structural image quality and fracture characteristics.
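As a minimal illustration of the metric itself, the sketch below computes the mean SSIM between two co-registered slices using scikit-image. The synthetic arrays stand in for registered CT data, and the data-range handling is an assumption; real CT datasets would first need the registration step described above.

```python
# Minimal sketch of an SSIM comparison with scikit-image; the inputs
# here are synthetic arrays standing in for co-registered CT slices.
import numpy as np
from skimage.metrics import structural_similarity

def compare_slices(reference, test, data_range=None):
    """Return the mean SSIM between two co-registered 2D slices."""
    if data_range is None:
        # Assumption: derive the dynamic range from the reference image.
        data_range = reference.max() - reference.min()
    return structural_similarity(reference, test, data_range=data_range)

rng = np.random.default_rng(0)
ref = rng.normal(size=(128, 128))
noisy = ref + 0.05 * rng.normal(size=(128, 128))
print(compare_slices(ref, noisy))  # close to 1.0 for similar images
```

An SSIM of 1.0 indicates identical images, which is consistent with the near-unity values reported above for the deep learning algorithm.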
Training recurrent neural networks on long texts, in particular scholarly documents, causes problems for learning. While hierarchical attention networks (HANs) are effective in solving these problems, they still lose important information about the structure of the text. To address this, we propose the use of HANs combined with structure tags that mark the role of sentences in the document. Adding tags to sentences, marking them as corresponding to the title, abstract or main body text, yields improvements over the state-of-the-art for scholarly document quality prediction. The proposed system is applied to the task of accept/reject prediction on the PeerRead dataset and compared against a recent BiLSTM-based model and a joint textual+visual model, as well as against plain HANs. Compared to plain HANs, accuracy increases on all three domains. On the computation and language domain our new model works best overall, increasing accuracy by 4.7% over the best result in the literature. We also obtain improvements when introducing the tags for predicting the number of citations of 88k scientific publications that we compiled from the Allen AI S2ORC dataset. With structure tags, our HAN system reaches 28.5% explained variance, an improvement of 1.8% over our reimplementation of the BiLSTM-based model and of 1.0% over plain HANs.
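A minimal sketch of what such structure-tagging could look like as a preprocessing step, assuming each tag is prepended as an extra token before the sentence enters the hierarchical encoder; the tag strings and the input format are illustrative assumptions, not the authors' exact scheme.

```python
# Minimal sketch of structure-tagging: prepend a token marking a
# sentence's role (title / abstract / body) before HAN encoding.
# The tag strings and input format are illustrative assumptions.
SECTION_TAGS = {"title": "<ttl>", "abstract": "<abs>", "body": "<bdy>"}

def tag_sentences(document):
    """document: list of (section, sentence) pairs ->
    list of tagged token lists ready for a hierarchical encoder."""
    tagged = []
    for section, sentence in document:
        tag = SECTION_TAGS.get(section, "<bdy>")  # default to body text
        tagged.append([tag] + sentence.split())
    return tagged

doc = [("title", "Structure tags improve quality prediction"),
       ("abstract", "We add role markers to sentences."),
       ("body", "Experiments on PeerRead show gains.")]
print(tag_sentences(doc)[0])  # ['<ttl>', 'Structure', 'tags', ...]
```

Treating the tags as ordinary vocabulary tokens lets the sentence-level attention learn role-specific weighting without any change to the HAN architecture itself.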