2023
DOI: 10.1371/journal.pone.0281931
Small hand-designed convolutional neural networks outperform transfer learning in automated cell shape detection in confluent tissues

Abstract: Mechanical cues such as stresses and strains are now recognized as essential regulators in many biological processes like cell division, gene expression or morphogenesis. Studying the interplay between these mechanical cues and biological responses requires experimental tools to measure these cues. In the context of large-scale tissues, this can be achieved by segmenting individual cells to extract their shapes and deformations, which in turn inform on their mechanical environment. Historically, this has been d…

Cited by 5 publications (2 citation statements)
References 56 publications
“…Furthermore, these studies utilize transfer learning, advantageous for achieving rapid and satisfactory results with limited datasets. Ad-hoc CNNs may outperform transfer learning when trained specifically for a particular solution [33, 34]. Transfer learning, often relying on pretraining with non-medical images, carries the risk of inheriting undesired patterns or biases, potentially diminishing accuracy in addressing specific issues.…”
Section: Discussion
confidence: 99%
“…The first stage comprises 128 filters; this number is doubled for each subsequent block. After the 4th block, we use 2 regression heads on the flattened features: one scalar l for the length, and one vector (cos(2α), sin(2α)) for the orientation, which we preferred to the scalar α so as to avoid issues related to 0 and π representing the same orientation while appearing distant in the classical MSE [10]. We used data augmentation and training protocols similar to the one described for the U-Net.…”
Section: Methods
confidence: 99%
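The angle encoding described in the quoted passage can be illustrated in a few lines. This is a minimal sketch, not the authors' code: the helper names are hypothetical, and it only demonstrates why regressing (cos(2α), sin(2α)) avoids the discontinuity that a plain MSE on the scalar angle α has at the 0/π wraparound.

```python
import math

def encode_orientation(alpha):
    """Encode an orientation alpha (defined modulo pi) as the vector
    (cos(2*alpha), sin(2*alpha)); alpha = 0 and alpha = pi map to the
    same target, so the encoding has no wraparound discontinuity."""
    return (math.cos(2 * alpha), math.sin(2 * alpha))

def decode_orientation(c, s):
    """Recover an orientation in [0, pi) from the encoded vector."""
    return (math.atan2(s, c) / 2) % math.pi

# alpha = 0 and alpha = pi describe the same cell orientation, but a
# naive MSE on the scalar angle treats them as maximally distant ...
a, b = 0.0, math.pi
naive_mse = (a - b) ** 2                        # large: pi squared

# ... while the encoded targets are identical, so their MSE vanishes.
ca, sa = encode_orientation(a)
cb, sb = encode_orientation(b)
encoded_mse = (ca - cb) ** 2 + (sa - sb) ** 2   # ~0
```

Decoding with `atan2` followed by a halving and a modulo-π reduction inverts the encoding, which is presumably how a predicted vector would be turned back into an orientation at inference time.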