2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw.2017.11
Spheroid Segmentation Using Multiscale Deep Adversarial Networks

Cited by 15 publications (17 citation statements)
References 6 publications
“…Sadanandan et al. showed that the dimensions of U-Net could be reduced by combining raw images with images pre-filtered with task-specific hand-engineered filters, achieving robust segmentation of Escherichia coli and mouse mammary cells in both phase contrast and fluorescence images. In another article, Sadanandan et al. combined CNNs and GANs for segmenting spheroid cell clusters in bright-field images. Rather than forcing the GAN to re-create synthetic images, it was used recursively to improve a set of manually drawn segmentation masks, achieving performance gains over a baseline CNN segmentation architecture.…”
Section: Deep Learning For Image Cytometry (mentioning)
confidence: 99%
“…This differs from the aforementioned works in the sense that the additional loss term is learned by the discriminator rather than being a fixed hand-crafted loss term. The same mechanism was later applied to image-to-image translation [20], medical image analysis [5,6,7,8,16,21,22,23] and other segmentation tasks [24]. In contrast to our work, this formulation of adversarial training does not use the pairing information of images and labels.…”
Section: Related Work (mentioning)
confidence: 99%
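A minimal sketch of the adversarial setup this statement describes, where the discriminator supplies a learned loss term on top of a pixel-wise segmentation loss. Everything here (the tiny networks, the 0.1 adversarial weight, the optimizer settings, the random toy batch) is an assumption for illustration, not the formulation used in the cited works.

```python
# Minimal sketch: a discriminator learns the extra loss term for a segmentation
# network instead of a hand-crafted regularizer. All sizes/weights are assumed.
import torch
import torch.nn as nn

seg_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 1))                        # predicts mask logits
disc = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Conv2d(8, 1, 3, stride=2, padding=1),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())        # real/fake score per mask

bce = nn.BCEWithLogitsLoss()
opt_s = torch.optim.Adam(seg_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

image = torch.rand(4, 1, 64, 64)                                   # toy batch
gt_mask = (torch.rand(4, 1, 64, 64) > 0.5).float()                 # toy annotations

# Discriminator step: distinguish annotated masks from predicted ones.
pred = torch.sigmoid(seg_net(image)).detach()
d_loss = bce(disc(gt_mask), torch.ones(4, 1)) + bce(disc(pred), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Segmentation step: pixel-wise loss plus the learned adversarial term.
logits = seg_net(image)
adv = bce(disc(torch.sigmoid(logits)), torch.ones(4, 1))            # try to fool the discriminator
s_loss = bce(logits, gt_mask) + 0.1 * adv                           # 0.1 weight is illustrative
opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```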
“…Several methods have used GANs for microscopy segmentation [2,15,19]. [2] applied a GAN by proposing a multiple-input architecture in the discriminator, which takes as input both the microscopy images and the corresponding annotated segmentations.…”
Section: Related Work (mentioning)
confidence: 99%
“…[2] applied a GAN by proposing a multiple-input architecture in the discriminator, which takes as input both the microscopy images and the corresponding annotated segmentations. [15] extended the GAN by replacing the generative model with a multi-scale segmentation network. One limitation of these methods, however, is that they still require manually annotated images during training.…”
Section: Related Work (mentioning)
confidence: 99%
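The multiple-input discriminator mentioned in [2] can be pictured as a network that conditions on the image and the mask jointly, so it can penalise masks that do not match their image rather than judging mask realism alone. The module below is a hypothetical toy version; the channel counts, layer choices, and class name are not taken from the paper.

```python
# Hypothetical toy version of a multiple-input (image + mask) discriminator.
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """Scores (image, mask) pairs; real pairs use annotated masks, fake pairs use predictions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),  # 2 = image + mask channels
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),      # one score per pair
        )

    def forward(self, image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([image, mask], dim=1))                  # concatenate along channels

disc = PairDiscriminator()
image = torch.rand(2, 1, 64, 64)
real_mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(disc(image, real_mask).shape)                                       # torch.Size([2, 1])
```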