2021
DOI: 10.1101/2021.11.17.468809
Preprint

DeepProjection: Rapid and structure-specific projections of tissue sheets embedded in 3D microscopy stacks using deep learning

Abstract: The efficient extraction of local high-resolution content from massive amounts of imaging data remains a serious and unsolved problem in studies of complex biological tissues. Here we present DeepProjection, a trainable projection algorithm based on deep learning. This algorithm rapidly and robustly extracts image content contained in curved manifolds from time-lapse recorded 3D image stacks by binary masking of background content, stack by stack. The masks calculated for a given movie, when predicted, e.g., o…

Cited by 6 publications (9 citation statements)
References 23 publications
“…Folders of individual TIFFs were imported into FIJI (ImageJ reference) as a virtual stack, and then initial projections were calculated using z-projection with the maximum and median. For subsequent classification using predictive modelling, stacks were projected using the FIJI plugin for local z projection (LZP) (https://biii.eu/local-z-projector), an optimal method for structure-specific projections that can be computed rapidly (Haertter et al, 2021). LZP was run in default settings for these stacks for the reference surface, with a max of the mean method with a 21 pixel neighbourhood search size and a 41 pixel median post-filter size, and using MIP to extract the projection.…”
Section: Methods (mentioning)
confidence: 99%
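The workflow quoted above is essentially "estimate a reference surface, then project a thin band around it". As a rough illustration only, here is a minimal NumPy sketch of that idea; it is not the FIJI LZP plugin or DeepProjection, and the file name stack.tif, the band half-width dz, and the exact filter choices are assumptions made for the example.

```python
# Minimal sketch (not the FIJI LZP plugin): baseline z-projections and a
# crude "max of mean" local projection, assuming a z-stack stored in stack.tif.
import numpy as np
import tifffile
from scipy.ndimage import uniform_filter, median_filter

stack = tifffile.imread("stack.tif").astype(np.float32)  # shape (Z, Y, X); hypothetical file

# Baseline projections analogous to FIJI's Z Project (maximum / median).
mip = stack.max(axis=0)
med = np.median(stack, axis=0)

# "Max of mean" reference surface: smooth each slice with a local mean
# (21x21 px, as in the quoted settings), pick the z with the largest smoothed
# intensity per pixel, then median-filter the resulting height map (41 px).
smoothed = uniform_filter(stack, size=(1, 21, 21))
z_ref = np.argmax(smoothed, axis=0)
z_ref = median_filter(z_ref, size=41)

# Local MIP: maximum over a thin band (+/- dz slices) around the reference
# surface instead of over the whole stack.
dz = 2  # assumed band half-width
z_idx = np.arange(stack.shape[0])[:, None, None]
band = np.abs(z_idx - z_ref[None, :, :]) <= dz
local_mip = np.where(band, stack, -np.inf).max(axis=0)

tifffile.imwrite("local_mip.tif", local_mip.astype(np.float32))
```

DeepProjection differs in that the per-pixel mask is predicted by a trained network rather than computed heuristically; the sketch only shows what a "max of mean" reference surface combined with a local MIP means in practice.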
“…One way to address this is to project 3D information into 2D, which can then be further analysed using various types of machine learning approaches. In this work, we have chosen to leverage the LZP projection, one of the latest fast approaches for projecting a 3D image into 2D while preserving information (Haertter et al, 2021).…”
Section: Introduction (mentioning)
confidence: 99%
“…We used time-lapse confocal microscopy to image the entire dorsal closure process in E-cadherin-GFP embryos. We then used our custom machine-learning-based cell segmentation and tracking algorithm to create time series of cell centroid position, area, perimeter, aspect ratio, and individual junction contour lengths for every cell in the AS (32). At the onset of closure we find that cells in the AS exhibit considerable variability of the cell shape index q_i = p_i/√(a_i) (Fig.…”
Section: Modeling and Experimental Analysis (mentioning)
confidence: 99%
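The dimensionless cell shape index referenced in that statement, q_i = p_i/√(a_i), is straightforward to compute from segmented cell outlines. A small sketch follows, using a hypothetical polygon (a regular hexagon, for which q ≈ 3.72) rather than any data from the paper.

```python
# Sketch: cell shape index q_i = p_i / sqrt(a_i) for a polygonal cell,
# computed from a hypothetical list of vertex coordinates.
import numpy as np

def shape_index(vertices):
    """Perimeter / sqrt(area) for a simple polygon given as (N, 2) vertices."""
    v = np.asarray(vertices, dtype=float)
    nxt = np.roll(v, -1, axis=0)
    perimeter = np.sum(np.linalg.norm(nxt - v, axis=1))
    area = 0.5 * abs(np.sum(v[:, 0] * nxt[:, 1] - nxt[:, 0] * v[:, 1]))  # shoelace formula
    return perimeter / np.sqrt(area)

# Regular hexagon (vertices on the unit circle): q should be close to 3.72.
theta = np.linspace(0, 2 * np.pi, 6, endpoint=False)
hexagon = np.c_[np.cos(theta), np.sin(theta)]
print(shape_index(hexagon))
```

Because the index is perimeter divided by the square root of area, it is scale-invariant: it characterizes cell shape (elongation, irregularity) independently of cell size.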
“…Alternatively, one can segment the surfaces of interest by using supervised machine learning tools such as the software solutions Weka [14] or Ilastik [15], as proposed in the ImSAnE surface reconstruction framework [16]. A deep learning approach, using a network of the U-net type to segment the pixels belonging to a single surface of interest, has also recently been reported [17]. While promising as they can provide state of the art segmentations of epithelial surfaces in difficult imaging conditions, machine learning approaches require the prior manual annotation of a sufficiently large set of surfaces to generate suitable training sets.…”
Section: Introduction (mentioning)
confidence: 99%
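For orientation, the "network of the U-net type" mentioned in that passage refers to an encoder/decoder that outputs a per-pixel binary mask of the surface of interest. Below is a deliberately tiny, hypothetical PyTorch sketch of such a network; the depth, channel counts, and input format are assumptions for illustration and do not reproduce the architecture of [17].

```python
# Hypothetical, minimal U-Net-style encoder/decoder that maps an image
# (e.g., one z-slice) to a per-pixel foreground probability mask.
# Illustrative sketch only, not the architecture published in [17].
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)   # 16 skip channels + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel logit

    def forward(self, x):
        e1 = self.enc1(x)                # full resolution features
        e2 = self.enc2(self.pool(e1))    # half resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))  # probability of "surface" per pixel

# Usage sketch: one grayscale slice of shape (batch, channel, H, W).
mask = TinyUNet()(torch.rand(1, 1, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```

Training such a model requires manually annotated masks, which is exactly the annotation burden the quoted passage points out as the main cost of machine learning approaches.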