2023
DOI: 10.1007/978-3-031-33380-4_13
Vision Transformers for Small Histological Datasets Learned Through Knowledge Distillation


Cited by 10 publications (10 citation statements)
References 22 publications
“…To enhance the generalizability of DL models, incremental learning may be used to tune models on other public datasets, such as PICTURE and PROSTATEx [28]. To increase the precision of PCa grading, future studies can concentrate on integrating vision transformers (ViTs) [12]. mpMRI, perhaps in conjunction with data from other modalities, can substantially improve PCa diagnosis using DL models.…”
Section: Discussion
confidence: 99%
“…The future of automated PCa grading DL models will benefit from vision transformers (ViTs) [4] and their multi-head attention [36] over the spatial correlations of mpMRI images. This work can be further extended by combining ViTs and CNNs for PCa grading tasks, possibly in combination with data from different modalities.…”
Section: Limitations and Future Work
confidence: 99%
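The paper's central technique, knowledge distillation, trains a small student model to match the temperature-softened output distribution of a larger teacher. A minimal sketch of the standard Hinton-style objective is below; the temperature `T` and mixing weight `alpha` are illustrative assumptions, not the values used by the cited paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Hinton-style knowledge distillation:
    alpha * T^2 * KL(teacher_soft || student_soft) + (1 - alpha) * CE(student, labels)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL divergence between softened teacher and student distributions.
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    # Standard cross-entropy on the hard ground-truth labels (T = 1).
    p_hard = softmax(student_logits)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * (T ** 2) * kl + (1 - alpha) * ce
```

When the student exactly matches the teacher, the KL term vanishes and only the cross-entropy term remains; the `T ** 2` factor keeps gradient magnitudes comparable across temperatures.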