2022
DOI: 10.1016/j.patrec.2022.01.008
Self-supervised representation learning for detection of ACL tear injury in knee MR videos

Cited by 10 publications (3 citation statements)
References 13 publications
“…After proving the effectiveness of using the preprocessing in our work, we need to validate it further against the outcomes of some other modern approaches. In this study, the outcomes are compared with a Convolutional Neural Network (CNN) 34 , Inception-v3 35 , Deep Belief Networks with an Improved Honey Badger Algorithm (DBN/IHBA) 22 , an integration of the CNN with an Amended Cooking Training-based Optimizer (CNN/ACTO) 36 , and Self-Supervised Representation Learning (SSRL) 37 .…”
Section: Results
Mentioning confidence: 99%
“…An innate relationship was used in 15 out of 79 studies (Table 1 ). Nine of these studies designed their innate-relationship pretext task around different image transformations, including rotation prediction 44 – 47 , horizontal-flip prediction 48 , reordering shuffled slices 49 , and patch order prediction 46 , 50 – 52 . Notably, Jiao et al pre-trained their models simultaneously with two innate-relationship pretext tasks (slice order prediction and geometric transformation prediction), and showed that a weight-sharing Siamese network outperforms a single disentangled model for combining the two pre-training objectives 53 .…”
Section: Results
Mentioning confidence: 99%
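The transformation-based pretext tasks mentioned above (rotation prediction, flip prediction, slice reordering) all share one idea: apply a known transformation to the data and train a model to predict which transformation was applied, so the labels come for free. A minimal NumPy sketch of the rotation-prediction variant, with a toy batch standing in for MR slices (the encoder and classifier head are omitted; all names here are illustrative, not from the cited papers):

```python
import numpy as np

def make_rotation_batch(images, rng):
    """Rotate each image by a random multiple of 90 degrees.

    Returns the rotated images and the rotation index (0-3); a small
    classifier head on top of an encoder would be trained to predict
    this index, which is supervision obtained from the data itself.
    """
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

rng = np.random.default_rng(0)
batch = rng.random((8, 32, 32))  # 8 toy grayscale "slices"
rotated, labels = make_rotation_batch(batch, rng)

# Sanity check: each rotated image can be un-rotated using its label,
# confirming the labels are recoverable without human annotation.
restored = np.stack([np.rot90(img, -k) for img, k in zip(rotated, labels)])
assert np.allclose(restored, batch)
```

The slice-reordering and patch-order tasks cited above follow the same pattern, only with a permutation index as the free label instead of a rotation index.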
“…Due to the large difference between the distributions of medical images and natural images, how to effectively apply existing SSL frameworks to medical image analysis tasks has become a research hotspot. While some works have attempted to design domain-specific pretext tasks 42 – 46 , other works try to apply improved versions of existing advanced contrastive learning frameworks to medical data 13 , 14 , 16 , 47 – 53 .…”
Section: Related Work
Mentioning confidence: 99%
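The contrastive frameworks referred to in this statement (e.g. SimCLR-style methods) train an encoder so that two augmented views of the same image have similar embeddings while other images' embeddings are pushed apart, typically via the NT-Xent loss. A hedged NumPy sketch of that loss, with random vectors standing in for an encoder's outputs (this is a generic illustration of the contrastive objective, not the specific frameworks cited):

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over paired views, where z1[i] and z2[i] are
    embeddings of two augmentations of the same image (rows = vectors)."""
    z = np.concatenate([z1, z2])                      # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = len(z1)
    # Row i's positive is its paired view in the other half of the batch.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

rng = np.random.default_rng(1)
z1 = rng.standard_normal((4, 16))
# Views that nearly agree (small perturbation) vs. unrelated views:
loss_pos = nt_xent(z1, z1 + 0.01 * rng.standard_normal((4, 16)))
loss_neg = nt_xent(z1, rng.standard_normal((4, 16)))
assert loss_pos < loss_neg  # aligned view pairs yield a lower loss
```

The medical-imaging adaptations cited above largely keep this objective and instead change the augmentations or positive-pair definitions to suit the imaging domain.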