2021
DOI: 10.1016/j.neunet.2020.10.018
FMixCutMatch for semi-supervised deep learning

Cited by 21 publications (7 citation statements)
References 13 publications
“…This process continues for several iterations and better prediction results are achieved at the end of the process. Recently, different semi-supervised deep learning algorithms were developed and applied to different problems [60, 78, 86, P29]. In addition, unsupervised deep learning algorithms can help to reduce the burden of labeled data.…”
Section: Results (mentioning)
confidence: 99%
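The iterative process this excerpt describes is self-training: a model labels the unlabelled points it is most confident about, those pseudo-labels are added to the training set, and the model is refit. A minimal sketch, using a toy 1-D threshold "classifier" and a hypothetical confidence margin of 0.2 (none of these specifics come from the cited works):

```python
# Self-training loop: pseudo-label high-confidence unlabelled points,
# retrain, and repeat for several iterations.
labeled = [(0.1, 0), (0.2, 0), (0.9, 1), (0.8, 1)]  # (x, y) pairs
unlabeled = [0.15, 0.3, 0.6, 0.85]

def fit_threshold(data):
    # Midpoint between the two class means: a stand-in for training a model.
    m0 = sum(x for x, y in data if y == 0) / sum(1 for _, y in data if y == 0)
    m1 = sum(x for x, y in data if y == 1) / sum(1 for _, y in data if y == 1)
    return (m0 + m1) / 2

for _ in range(3):  # "several iterations"
    t = fit_threshold(labeled)
    confident = [x for x in unlabeled if abs(x - t) > 0.2]  # far from boundary
    labeled += [(x, int(x > t)) for x in confident]          # pseudo-label them
    unlabeled = [x for x in unlabeled if x not in confident]
```

Points near the decision boundary (0.3 and 0.6 here) are never pseudo-labelled, which is the usual safeguard against reinforcing the model's own mistakes.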
“…They applied URL-based, domain-based, path-based, file-based, and query-based features. Wei [78, P28] used an embedding layer, three convolutional layers, three max pooling layers, and one fully-connected layer to develop the CNN-based phishing detection model. Li et al. [42, P13, P14] applied a convolutional layer of four 3*1 filters, a max pooling layer, two dense layers, and a softmax output layer for their model.…”
Section: Results (mentioning)
confidence: 99%
“…These modelling assumptions create a paradigm where unlabelled data can be useful for model estimation (Chapelle et al., 2010). There has been a recent revival of interest in semi-supervised learning in the machine learning community due to impressive empirical progress on benchmark image and text classification datasets; for example, Tarvainen and Valpola (2017), Laine and Aila (2017), Miyato et al. (2019), Berthelot et al. (2019), Xie et al. (2019), Berthelot et al. (2020), Sohn et al. (2020), and Wei et al. (2021).…”
Section: Brief Overview of SSL Approaches (mentioning)
confidence: 99%
“…Additional unlabelled observations are generated by perturbing the original set of unlabelled observations, adding random noise or applying transformations such as rotations or translations. Data augmentation is typically coupled with consistency regularization, so that similar predictions are encouraged on the original instances and the augmented versions (Berthelot et al., 2019; Nair et al., 2019; Wei et al., 2021). The combined use of small local alterations and more aggressive global changes has been found to be an effective strategy (Sohn et al., 2020).…”
Section: Brief Overview of SSL Approaches (mentioning)
confidence: 99%
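The augmentation-plus-consistency recipe in the excerpt can be sketched numerically: perturb an unlabelled batch and penalize disagreement between the model's predictions on the original and augmented copies. The linear-softmax "model" and Gaussian-noise scale of 0.1 below are illustrative stand-ins, not the cited methods:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, w):
    # Toy linear-softmax classifier standing in for a deep network.
    z = x @ w
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)  # per-row class probabilities

x_unlab = rng.normal(size=(8, 5))                       # unlabelled batch
w = rng.normal(size=(5, 3))                             # model parameters
x_aug = x_unlab + 0.1 * rng.normal(size=x_unlab.shape)  # noise augmentation

p, p_aug = model(x_unlab, w), model(x_aug, w)
consistency_loss = np.mean((p - p_aug) ** 2)  # penalize prediction drift
```

Minimizing this loss alongside the supervised loss pushes the decision function to be locally smooth, which is how the unlabelled data influences training without any labels.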