2020
DOI: 10.1016/j.knosys.2019.105343

Autoencoder based sample selection for self-taught learning

Abstract: Self-taught learning is a technique that uses a large amount of unlabeled data as source samples to improve task performance on target samples. Compared with other transfer learning techniques, self-taught learning can be applied to a broader set of scenarios due to the loose restrictions on source data. However, knowledge transferred from source samples that are not sufficiently related to the target domain may negatively influence the target learner, which is referred to as negative transfer. In this paper…
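The abstract is truncated here, but the title suggests filtering source samples with an autoencoder before self-taught learning. A minimal sketch of one plausible reading, assuming the selection criterion is the reconstruction error of an autoencoder trained on target-domain data (the paper's actual criterion may differ; `select_source_samples`, the network sizes, and the keep ratio are all illustrative):

```python
# Hypothetical autoencoder-based source-sample selection: keep source
# samples that an autoencoder trained on TARGET data reconstructs well,
# on the assumption that low reconstruction error means "target-like".
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def select_source_samples(target, source, keep_ratio=0.5, epochs=200):
    """Keep the source samples the target-trained AE reconstructs best."""
    ae = Autoencoder(target.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):                 # fit the AE on target data only
        opt.zero_grad()
        loss = loss_fn(ae(target), target)
        loss.backward()
        opt.step()
    with torch.no_grad():                   # per-sample reconstruction error
        err = ((ae(source) - source) ** 2).mean(dim=1)
    k = int(keep_ratio * len(source))
    idx = torch.argsort(err)[:k]            # lowest error = most target-like
    return source[idx]

# toy usage: 100 target samples, 1000 unlabeled source samples, 20 features
target = torch.randn(100, 20)
source = torch.randn(1000, 20)
selected = select_source_samples(target, source)
print(selected.shape)  # torch.Size([500, 20])
```

The selected subset would then serve as the unlabeled source pool for self-taught learning, discarding the samples most likely to cause negative transfer.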

Cited by 13 publications (5 citation statements)
References 41 publications
“…Convolutional neural network (CNN) tests are also suggested, which have great potential for generalization in areas not seen before, as they use contextual information and are not strongly affected by absolute pixel values [85]. Other suggested approaches to reduce the overfitting problem are endless learning [86] and self-taught learning [87], which can be tested in the future.…”
Section: Discussion
confidence: 99%
“…Unsupervised domain adaptation assumes that only unlabeled data are available in the target domain. The core idea is to find a shared feature space which can reduce the domain distribution divergence (Lee et al. 2007; Qiu et al. 2017; Wang and Mahadevan 2008; Jiang et al. 2019; Feng, Yu, and Duarte 2020). Pan et al. (Pan et al. 2009) and Long et al. (Long et al. 2015) proposed to minimize the Maximum Mean Discrepancy (MMD) to align the domain distributions.…”
Section: Application Description
confidence: 99%
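The statement above points to MMD minimization as the standard way to measure and reduce the divergence between source and target feature distributions. A minimal sketch of the empirical squared MMD with an RBF kernel; the median bandwidth heuristic is an assumed choice, not taken from the cited papers:

```python
# Empirical squared Maximum Mean Discrepancy (MMD^2) with an RBF kernel --
# a sketch of the divergence measure the cited works minimize.
import numpy as np

def rbf_kernel(x, y, gamma):
    # pairwise squared distances, then k(a, b) = exp(-gamma * ||a - b||^2)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(xs, xt, gamma=None):
    """Biased estimate of squared MMD between source xs and target xt."""
    if gamma is None:
        # median heuristic for the kernel bandwidth (an assumed default)
        z = np.vstack([xs, xt])
        d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
        gamma = 1.0 / np.median(d2[d2 > 0])
    return (rbf_kernel(xs, xs, gamma).mean()
            + rbf_kernel(xt, xt, gamma).mean()
            - 2 * rbf_kernel(xs, xt, gamma).mean())

# toy usage: matched samples give ~0, shifted Gaussians give a clear gap
rng = np.random.default_rng(0)
a = rng.normal(0, 1, (200, 5))
b = rng.normal(1, 1, (200, 5))
print(mmd2(a, a[::-1].copy()), mmd2(a, b))  # ~0 vs. clearly positive
```

In adaptation methods like those cited, a term of this form is added to the training loss so the learned features make the two domains statistically indistinguishable.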
“…The autoencoder was also optimized using gradient-based learning. Lastly, sparse autoencoder networks have achieved remarkable performance in representation learning [21], [22]. However, better representation learning can be obtained when multiple sparse autoencoders are stacked and optimized effectively, which is the focus of this research.…”
Section: Related Work
confidence: 99%
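The statement above refers to sparse autoencoders as the representation learner. A minimal sketch of one common formulation, a KL-divergence penalty that pushes each hidden unit's average activation toward a small target; the cited works may use a different penalty, and the target `rho` and weight `0.1` are assumed values:

```python
# Sparse autoencoder sketch: the KL penalty drives the mean sigmoid
# activation of each hidden unit toward a small target rho, yielding
# sparse codes. One common formulation; details may differ from [21], [22].
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))      # hidden code in (0, 1)
        return self.dec(h), h

def sparsity_penalty(h, rho=0.05, eps=1e-8):
    """KL(rho || rho_hat) summed over hidden units."""
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)  # mean activation per unit
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

x = torch.randn(256, 20)                    # toy unlabeled data
model = SparseAE(20)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    recon, h = model(x)
    loss = nn.functional.mse_loss(recon, x) + 0.1 * sparsity_penalty(h)
    loss.backward()
    opt.step()
```

Stacking, as the statement describes, would repeat this greedily layer by layer: train one sparse autoencoder, then train the next on the hidden codes `h` of the previous one.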