2017
DOI: 10.2514/1.i010437
Forward Adaptive Transfer of Gaussian Process Regression

Cited by 18 publications (15 citation statements)
References 16 publications
“…Recently, Wei et al [6] investigate the extension of T λ into the multiple-source transfer scenario and demonstrate a similar pathology. Wagle et al [27] further incorporate T λ with cross-domain matrices [28] to construct a new PSD transfer kernel. In [29], the authors propose a fast and scalable GP model based on T λ for the large source scenario.…”
Section: Related Work
confidence: 99%
“…In [6], Cao et al propose an adaptive transfer model, TGP, that uses a single similarity coefficient to reweight the covariance between the source and target domains. In [26], Wagle et al follow [6] and improve the transfer model by further distinguishing the source-to-source covariance from the target-to-target covariance. Wei et al [32] investigate an extension of [6] to the multiple-source transfer scenario and demonstrate a pathology.…”
Section: Related Work
confidence: 99%
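The single-coefficient reweighting attributed to TGP in the excerpt above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an RBF base kernel and a fixed, user-supplied similarity coefficient `lam` (the λ of the transfer kernel), and all function and variable names are ours.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def tgp_predict(Xs, ys, Xt, yt, Xq, lam=0.5, noise=1e-2):
    """Predict at target-domain queries Xq from source (Xs, ys) and
    target (Xt, yt) data, scaling only the cross-domain covariance
    blocks by lam (TGP-style single similarity coefficient)."""
    X = np.vstack([Xs, Xt])
    y = np.concatenate([ys, yt])
    K = rbf(X, X)
    ns = len(Xs)
    K[:ns, ns:] *= lam          # source-to-target block
    K[ns:, :ns] *= lam          # target-to-source block
    K += noise * np.eye(len(X))
    Kq = rbf(Xq, X)
    Kq[:, :ns] *= lam           # queries live in the target domain
    return Kq @ np.linalg.solve(K, y)
```

For |λ| ≤ 1 the blockwise scaling keeps the joint covariance positive semi-definite (it is the Hadamard product of the base kernel with a PSD block-constant matrix), which is the PSD property the transfer-kernel discussion above turns on.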
“…Although the proposed mTGP is built on a specific explicit relatedness model, i.e., TGP [6], the idea of succinct manifold learning, specifically Φ(•), is applicable to other methods with an explicit relatedness representation, e.g., [26,31,32]. We choose to build on TGP because it is the first fundamental method for explicit modelling of relatedness, on which the subsequent works [26,31,32] are based. We then emphasize the importance of latent manifold learning to transfer performance by comparing mTGP with TGP, which works directly in the original input feature space.…”
Section: Related Work
confidence: 99%
“…Surrogate-assisted modeling and optimization have been extensively deployed to facilitate modern aeroengine design [1,2,3,4] thanks to their capability to represent complex features. Among current surrogates (also known as machine learning models), the Gaussian process (GP) [5] (also known as Kriging or an emulator), a non-parametric Bayesian model, has gained popularity.…”
Section: Introduction
confidence: 99%
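As background to the excerpt above, the non-parametric Bayesian character of a GP surrogate is that it returns a predictive distribution, not just a point estimate. A minimal sketch, assuming a zero-mean prior with an RBF kernel; function names are illustrative:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential covariance between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X, y, Xq, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at query points Xq."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Kq = rbf(Xq, X)
    mean = Kq @ np.linalg.solve(K, y)
    var = rbf(Xq, Xq).diagonal() - np.einsum(
        'ij,ji->i', Kq, np.linalg.solve(K, Kq.T))
    return mean, var
```

The variance collapses near training data and reverts to the prior far from it, which is what makes GPs attractive for surrogate-assisted design loops where sampling is expensive.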