2018
DOI: 10.1002/jbio.201800146
Segmentation of Drosophila heart in optical coherence microscopy images using convolutional neural networks

Abstract: Convolutional neural networks (CNNs) are powerful tools for image segmentation and classification. Here, we use this method to identify and mark the heart region of Drosophila at different developmental stages in the cross-sectional images acquired by a custom optical coherence microscopy (OCM) system. With our well-trained CNN model, the heart regions through multiple heartbeat cycles can be marked with an intersection over union of ~86%. Various morphological and dynamical cardiac parameters can be quantified…
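The ~86% figure above is intersection over union (IoU), the standard overlap score for comparing a predicted segmentation mask with a ground-truth mask. A minimal sketch for binary masks, assuming NumPy arrays (this is an illustration of the metric, not the authors' implementation):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union for two boolean segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return np.logical_and(pred, target).sum() / union

# Toy example: two overlapping square "heart" regions, 16 px each.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(round(iou(a, b), 3))  # overlap 9 px, union 23 px -> 0.391
```

A perfect prediction gives 1.0; the ~86% reported means the marked heart region and the manual annotation overlap almost entirely across the heartbeat cycles.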

Cited by 14 publications (20 citation statements)
References 51 publications
“…Different from traditional data augmentation techniques as used in Refs. , in which only affine transformations were utilized and perform weakly in this case, in this paper, by observing the samples in the collected datasets, we postulate that the data distribution has some invariance with respect to not only affine transformations but also elastic deformations caused by the adhesion between cells and the growth of the cells themselves. So, in the experiment, not only affine transformations such as horizontal flip (FlipH), vertical flip (FlipV) and rotation (range from −180° to 180°) are employed, but also a kind of elastic transformation (ET) used in Refs.…”
Section: Results
Mentioning confidence: 95%
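The augmentations named in the excerpt can be sketched in a few lines. Below is a minimal NumPy-only illustration of the flips plus a crude elastic deformation (a smoothed random displacement field sampled with nearest-neighbour lookup); arbitrary-angle rotation is omitted, and this is a generic sketch of the idea, not the exact ET transform used in the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_h(img):          # horizontal flip (FlipH)
    return img[:, ::-1]

def flip_v(img):          # vertical flip (FlipV)
    return img[::-1, :]

def elastic(img, alpha=4.0, sigma=3):
    """Crude elastic deformation: random per-pixel displacements are
    smoothed with a moving-average kernel, scaled by alpha, then used
    to remap pixels with nearest-neighbour sampling."""
    h, w = img.shape
    dx = rng.standard_normal((h, w))
    dy = rng.standard_normal((h, w))
    k = np.ones((sigma, sigma)) / sigma**2
    def smooth(f):
        pad = sigma // 2
        fp = np.pad(f, pad, mode="edge")
        out = np.zeros_like(f)
        for i in range(h):
            for j in range(w):
                out[i, j] = (fp[i:i + sigma, j:j + sigma] * k).sum()
        return out
    dx, dy = smooth(dx) * alpha, smooth(dy) * alpha
    ys, xs = np.indices((h, w))
    ys = np.clip((ys + dy).round().astype(int), 0, h - 1)
    xs = np.clip((xs + dx).round().astype(int), 0, w - 1)
    return img[ys, xs]

img = rng.random((16, 16))
augmented = [flip_h(img), flip_v(img), elastic(img)]
```

In practice the same displacement field must also be applied to the label mask, so image and annotation stay aligned after augmentation.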
“…For evaluation metrics, 6 measures are used in the experiment, consisting of 3 scores commonly used in deep learning-based methods , namely Precision, Dice coefficient (Dice) and mean Intersection over Union (mIoU), and 3 metrics commonly used for traditional segmentation methods , namely false positive rate (FPR), false negative rate (FNR) and misclassification error (ME). All 6 metrics are defined as in Eqs.…”
Section: Results
Mentioning confidence: 99%
“…Similar to Figure , the nonlinear multimodal image is composed of CARS, TPEF and SHG signals. The images are reprinted from references with permissions. AE, auto-encoder; CARS, coherent anti-Stokes Raman scattering; SHG, second-harmonic generation; TPEF, two-photon excited fluorescence microscopy…”
Section: Deep Learning—an Overview
Mentioning confidence: 99%