2022
DOI: 10.1371/journal.pcbi.1009879

Benchmarking of deep learning algorithms for 3D instance segmentation of confocal image datasets

Abstract: Segmenting three-dimensional (3D) microscopy images is essential for understanding phenomena like morphogenesis, cell division, cellular growth, and genetic expression patterns. Recently, deep learning (DL) pipelines have been developed, which claim to provide high accuracy segmentation of cellular images and are increasingly considered as the state of the art for image segmentation problems. However, it remains difficult to define their relative performances as the concurrent diversity and lack of uniform eva…

Cited by 22 publications (21 citation statements) · References 45 publications
“…To some extent, the decrease in fluorescence levels with depth in raw images should not be a major issue for predicting feature boundaries with Cellpose, as it uses a vector-gradient representation of objects to accurately predict complex cell outlines with non-homogeneous cell marker distribution (Stringer et al., 2021). However, our result indicates that the SNR is an important prerequisite for image analysis with Cellpose, in line with previous observations (Kar et al., 2021). Along with an enhanced visualization of the structures of interest across the sample, the pre-processing of 3D images therefore allows for homogenization of the data set and much more efficient 3D segmentation with Cellpose, thus increasing the reproducibility and quality of analysis.…”
Section: Discussion (supporting)
confidence: 93%
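
To make the pre-processing step concrete, here is a minimal sketch, assuming a single-channel confocal stack on disk, of per-slice intensity normalization followed by Cellpose's 3D mode; the file name, diameter, and anisotropy are hypothetical placeholders, not the cited authors' actual pipeline.

```python
# Minimal sketch: depth-wise normalization + Cellpose 3D segmentation.
# File name, diameter and anisotropy are placeholders, not values from the paper.
import numpy as np
import tifffile
from cellpose import models

stack = tifffile.imread("confocal_stack.tif")  # hypothetical (Z, Y, X) volume

# Per-slice percentile normalization to compensate for signal loss with depth
normed = np.empty(stack.shape, dtype=np.float32)
for z, plane in enumerate(stack):
    lo, hi = np.percentile(plane, (1, 99))
    normed[z] = np.clip((plane - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

# Cellpose 3D mode: 2D flow fields are predicted on orthogonal slices and
# combined into a 3D vector field before masks are reconstructed.
model = models.Cellpose(gpu=False, model_type="cyto")
masks, flows, styles, diams = model.eval(
    normed,
    channels=[0, 0],   # single (grayscale) channel
    diameter=30,       # approximate cell diameter in pixels (placeholder)
    do_3D=True,
    anisotropy=2.0,    # Z-step / XY-pixel-size ratio (placeholder)
)
```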
“…Albeit minor, these errors occur in highly oocyte-dense regions or with non-optimal signal levels. Such an observation is in agreement with some studies that do not recommend Cellpose for highly overlapping masks or that describe lower accuracy with over- or underexposed images (Kar et al., 2021). This could be attributed to the 2D averaging process of the 3D Cellpose extension, which may have lower accuracy than a model trained with 3D data, especially for highly dense regions (Lalit et al., 2022; Stringer et al., 2021).…”
Section: Discussion (supporting)
confidence: 93%
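
As a rough illustration of the two built-in routes Cellpose offers for volumetric data (neither of which uses a natively 3D-trained model), a hedged sketch with placeholder values:

```python
# Sketch only: contrasting Cellpose's do_3D averaging with 2D-plus-stitching.
import numpy as np
from cellpose import models

volume = np.random.rand(40, 256, 256).astype(np.float32)  # placeholder (Z, Y, X) stack
model = models.Cellpose(gpu=False, model_type="cyto")

# Route 1: 2D flows predicted on XY, XZ and YZ slices are averaged into a
# 3D vector field (the "2D averaging" of the 3D extension discussed above).
masks_3d, _, _, _ = model.eval(volume, channels=[0, 0], diameter=30, do_3D=True)

# Route 2: independent 2D segmentation per Z-slice, with masks stitched
# across slices when their overlap (IoU) exceeds stitch_threshold.
masks_2d_stitched, _, _, _ = model.eval(
    volume, channels=[0, 0], diameter=30, do_3D=False, stitch_threshold=0.5
)
```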
“…Cell and tissue imaging and image analysis can be instrumental in measuring and quantifying complex cell phenotypes in time and space. There has been a rise in the development of automatic and semi-automatic algorithms that specialise in the segmentation of two- and three-dimensional cell cultures, including those based on label-free images [13, 51–54].…”
Section: Discussion (mentioning)
confidence: 99%
“…Many biomedical image analyses utilize convolutional neural networks to identify objects of interest in images, in part due to their ability to learn and extract important features in the local receptive fields of stacked convolutions [3,23]. Many such applications take advantage of two particular architecture families: region-based networks, which propose object regions in an image for downstream segmentation, and U-Net-based architectures, which contain an encoder-decoder network that extracts features and spatial information to construct object segmentations [18].…”
Section: Introduction (mentioning)
confidence: 99%
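
To illustrate the encoder-decoder family referred to above, here is a minimal 3D U-Net-style sketch in PyTorch; it is not one of the benchmarked pipelines, and the network depth and channel widths are illustrative only.

```python
# Minimal 3D U-Net-style encoder-decoder (illustrative, not a benchmarked model).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with ReLU: the basic U-Net feature extractor.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)            # encoder, full resolution
        self.enc2 = conv_block(base, base * 2)         # encoder, 1/2 resolution
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)     # after skip concatenation
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, n_classes, 1)      # per-voxel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# One single-channel 64^3 patch in, per-voxel logits of the same spatial size out.
logits = TinyUNet3D()(torch.zeros(1, 1, 64, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64, 64])
```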