2021
DOI: 10.1016/j.crmeth.2021.100105
A deep learning-based segmentation pipeline for profiling cellular morphodynamics using multiple types of live cell microscopy

Abstract: MOTIVATION: Quantitative studies of cellular morphodynamics rely on extracting leading-edge velocity time series based on accurate cell segmentation from live cell imaging. However, live cell imaging poses numerous challenges for accurate edge localization. Fluorescence live cell imaging produces noisy and low-contrast images due to phototoxicity and photobleaching. While phase contrast microscopy is gentle to live cells, it suffers from halo and shade-off artifacts that cannot be h…

Cited by 21 publications (25 citation statements)
References: 61 publications
“…MARS-Net (Jang et al., 2021) takes a transfer learning approach (Bertasius et al., 2015; Iglovikov et al., 2018; Kim et al., 2018; Long et al., 2015; Vaidyanathan et al., 2021) by integrating an ImageNet-pretrained VGG19 encoder and a U-Net decoder with additional dropout layers (Deng et al., 2009; Ronneberger et al., 2015; Simonyan and Zisserman, 2015; Srivastava et al., 2014), trained on datasets from multiple types of microscopy. MARS-Net accepts the segmented images generated in Part 3 of this protocol.…”
Section: Step-by-step Methods Details
Citation type: mentioning (confidence: 99%)
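The architecture this statement describes (an ImageNet-pretrained VGG19 encoder joined to a U-Net-style decoder with extra dropout layers) can be illustrated with a minimal Keras sketch. The input size, dropout rate, filter counts, and skip-connection points below are illustrative assumptions, not the authors' exact MARS-Net configuration.

```python
# Minimal sketch of a VGG19-encoder / U-Net-decoder segmentation model with dropout.
# Layer choices, dropout rate, and input size are illustrative assumptions,
# not the exact MARS-Net configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19

def build_vgg19_unet(input_shape=(256, 256, 3), dropout_rate=0.5):
    # ImageNet-pretrained VGG19 backbone used as the encoder (transfer learning).
    encoder = VGG19(include_top=False, weights="imagenet", input_shape=input_shape)

    # Feature maps tapped at the end of each VGG19 block for skip connections.
    skip_names = ["block1_conv2", "block2_conv2", "block3_conv4", "block4_conv4"]
    skips = [encoder.get_layer(name).output for name in skip_names]
    x = encoder.get_layer("block5_conv4").output  # bottleneck features

    # U-Net-style decoder: upsample, concatenate the matching skip,
    # convolve, and regularize with dropout at each stage.
    for filters, skip in zip([512, 256, 128, 64], reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Dropout(dropout_rate)(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    # Single-channel sigmoid output: per-pixel cell (foreground) probability.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(encoder.input, outputs, name="vgg19_unet_sketch")

model = build_vgg19_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
```

The skip connections pair each decoder stage with the encoder block at the same spatial resolution, which is what lets the ImageNet-pretrained features be reused for pixel-level edge localization.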
“…The visualization codes in this Part show the cross-validation results by averaging the evaluation results from the segmentation of each movie. For more details on what each plot describes, refer to the MARS-Net publication (Jang et al., 2021). CRITICAL: the Correspondence Algorithm (Arbelaez et al., 2010), which is necessary for calculating F1, precision, and recall, is only supported on Linux.…”
Section: Step-by-step Methods Details
Citation type: mentioning (confidence: 99%)
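The averaging step described in this statement (per-movie evaluation results combined into cross-validation summaries) can be sketched as follows. The input layout and metric names are assumptions for illustration and do not reflect MARS-Net's actual output format.

```python
# Illustrative sketch of averaging per-movie evaluation results into
# cross-validation summaries. The input layout (one dict of metric lists per
# held-out movie) is an assumption, not MARS-Net's actual file format.
import numpy as np

def summarize_cross_validation(per_movie_metrics):
    """per_movie_metrics: list of dicts, one per held-out movie, e.g.
    {"precision": [...], "recall": [...], "f1": [...]} with per-frame scores."""
    summary = {}
    for metric in ("precision", "recall", "f1"):
        # Mean score within each movie, then mean and std across movies.
        movie_means = np.array([np.mean(m[metric]) for m in per_movie_metrics])
        summary[metric] = {"mean": movie_means.mean(), "std": movie_means.std()}
    return summary

# Example with made-up numbers for two held-out movies.
example = [
    {"precision": [0.91, 0.93], "recall": [0.88, 0.90], "f1": [0.89, 0.91]},
    {"precision": [0.95, 0.94], "recall": [0.92, 0.93], "f1": [0.93, 0.94]},
]
print(summarize_cross_validation(example))
```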
“…Jang et al. developed MARS-Net (an accurate and robust segmentation network for multiple microscopy types) to localize cell edges and study morphological dynamics in migrating cells [118]. MARS-Net contains a pre-trained VGG19 encoder with a U-Net decoder and dropout layers. This network was trained on time-lapse images acquired with three different microscopy imaging modes: phase-contrast, spinning-disk confocal, and total internal reflection fluorescence (TIRF) microscopy.…”
Section: Deep Learning in Cell Migration Research
Citation type: mentioning (confidence: 99%)
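Training a single network on pooled time-lapse data from phase-contrast, spinning-disk confocal, and TIRF movies could look roughly like the sketch below. The data loading, array shapes, toy model, and hyperparameters are hypothetical stand-ins rather than the published training procedure.

```python
# Hypothetical sketch: pool frames from several microscopy modalities into one
# training stream so a single segmentation network sees all imaging conditions.
# Data shapes, the random stand-in loader, and the toy model are assumptions.
import numpy as np
import tensorflow as tf

def make_modality_dataset(n_frames, seed):
    # Stand-in for loading one modality's time-lapse frames and ground-truth masks
    # (e.g. phase contrast, spinning-disk confocal, or TIRF).
    rng = np.random.default_rng(seed)
    images = rng.random((n_frames, 256, 256, 3), dtype=np.float32)
    masks = (rng.random((n_frames, 256, 256, 1)) > 0.5).astype(np.float32)
    return tf.data.Dataset.from_tensor_slices((images, masks))

# Concatenate the three modalities and shuffle so batches mix imaging conditions.
pooled = (make_modality_dataset(32, seed=0)
          .concatenate(make_modality_dataset(32, seed=1))
          .concatenate(make_modality_dataset(32, seed=2))
          .shuffle(96)
          .batch(8))

# Toy stand-in model; in practice the VGG19/U-Net encoder-decoder from the
# earlier sketch would be trained on this pooled stream.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(pooled, epochs=1)
```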