2018
DOI: 10.1007/s11042-018-6463-x

A brief review on multi-task learning

Cited by 181 publications (84 citation statements). References 70 publications.
“…In this work, we introduce a learning scheme, termed multi-task (MT) SISSO, within the framework of the wider class of learning schemes known as multi-task learning (MTL) [32][33][34][35][36][37][38][39]. A task for a learning algorithm is the learning of a target property starting from a single input source (set of features).…”
Section: Introduction
confidence: 99%
“…A task for a learning algorithm is the learning of a target property starting from a single input source (set of features). The learning of multiple tasks (or MTL) is an umbrella term that refers to [38] (i) the learning of multiple target properties using a single input source, or (ii) the joint learning of a single target property using multiple input sources, or (iii) a mixture of both.…”
Section: Introduction
confidence: 99%
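The three MTL settings listed above can be illustrated with a shared-trunk network. The following is a minimal numpy sketch of setting (i), learning multiple target properties from a single input source; the layer sizes and weight initialization are illustrative assumptions, not taken from any of the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Setting (i): one input source (a single set of features), several target
# properties. A shared trunk maps the common input to a hidden
# representation; one linear "head" per task reads that representation.
n_features, n_hidden, n_tasks = 8, 16, 3

W_shared = rng.normal(size=(n_features, n_hidden))      # shared trunk weights
task_heads = [rng.normal(size=(n_hidden,)) for _ in range(n_tasks)]

def predict_all_tasks(x):
    """Forward pass: one shared representation, then one prediction per task."""
    h = np.tanh(x @ W_shared)             # representation shared by all tasks
    return [float(h @ w_task) for w_task in task_heads]

x = rng.normal(size=(n_features,))
preds = predict_all_tasks(x)              # one scalar prediction per task
```

Settings (ii) and (iii) invert or mix this picture: several input-specific trunks feeding a common head, or a combination of both sharing patterns.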
“…Multitask learning (MTL) is an active area of research in machine learning and is concerned with simultaneously learning multiple related tasks from a common dataset [21]. Inspired by this idea, many methods have been developed to learn multiple tasks simultaneously rather than separately [22]. All these techniques jointly learn multiple tasks within a common environment, which can complicate the learning process but can enhance performance over single-task learning models.…”
Section: Introduction
confidence: 99%
“…A possible method, which has recently attracted great interest, is the use of feed-forward neural network (FFNN) architectures with a certain number of hidden layers and an appropriate number of output neurons, each responsible for predicting one of the desired variables y_i with i = 1, ..., d. In the example of oxygen sensing, the output layer would have one neuron for the oxygen concentration [O2] and one for the temperature T. This work shows that, since the output neurons must use the same features (the output of the last hidden layer) for all variables [10,11], FFNNs are insufficiently flexible. For cases where the variables depend on the inputs in fundamentally different ways, this approach yields a result that is at best acceptable and at worst unusable.…”
Section: Introduction
confidence: 99%
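The shared-feature rigidity described in this statement can be made concrete with a small sketch. In the numpy snippet below, both output neurons (one for [O2], one for T) are linear read-outs of the identical last-hidden-layer features; the network shape and weights are hypothetical, chosen only to illustrate the structure being criticized:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical FFNN for the oxygen-sensing example: two output neurons,
# one for the oxygen concentration [O2] and one for the temperature T.
# Both outputs are computed from the SAME last-hidden-layer features,
# which is the inflexibility the passage points out.
n_in, n_hidden = 4, 10

W1 = rng.normal(size=(n_in, n_hidden))
W_out = rng.normal(size=(n_hidden, 2))   # column 0 -> [O2], column 1 -> T

def forward(x):
    features = np.tanh(x @ W1)    # last hidden layer, shared by both outputs
    o2, temp = features @ W_out   # both predictions read identical features
    return float(o2), float(temp), features

x = rng.normal(size=(n_in,))
o2, temp, features = forward(x)
```

If [O2] and T depend on the raw inputs in fundamentally different ways, no single feature vector `features` can serve both read-outs well, which is the motivation for more flexible multi-task architectures.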