2008
DOI: 10.1162/evco.2008.16.4.439

Multitask Visual Learning Using Genetic Programming

Abstract: We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training image…
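As a rough illustration of the architecture sketched in the abstract, the Python snippet below builds an individual containing two task trees that may both call a pool of shared subfunction trees, which is the stated channel for cross-task knowledge sharing. This is not the authors' implementation: `Node`, `grow_tree`, the arithmetic primitive set, and the numeric inputs are illustrative placeholders, whereas the actual method operates on visual primitives derived from images.

```python
import random

FUNCTIONS = ["add", "sub", "mul", "min", "max"]   # toy primitive set (assumption)
N_SUBFUNCTIONS = 2                                # shared trees per individual
N_INPUTS = 4                                      # numeric stand-ins for visual primitives


class Node:
    def __init__(self, op, children=()):
        self.op = op
        self.children = list(children)


def grow_tree(depth, n_callable):
    """Randomly grow a tree; leaves are inputs or calls to shared subfunctions."""
    if depth == 0 or random.random() < 0.3:
        if n_callable > 0 and random.random() < 0.5:
            return Node(("call", random.randrange(n_callable)))
        return Node(("input", random.randrange(N_INPUTS)))
    return Node(random.choice(FUNCTIONS),
                [grow_tree(depth - 1, n_callable) for _ in range(2)])


class Individual:
    """Two task trees plus shared subfunction trees forming one GP individual."""

    def __init__(self, depth=4):
        # Subfunctions do not call each other here, keeping evaluation finite.
        self.subfunctions = [grow_tree(depth, 0) for _ in range(N_SUBFUNCTIONS)]
        self.task_trees = [grow_tree(depth, N_SUBFUNCTIONS) for _ in range(2)]

    def evaluate(self, tree, inputs):
        """Interpret a tree over numeric inputs (toy semantics)."""
        if isinstance(tree.op, tuple):
            kind, idx = tree.op
            if kind == "input":
                return inputs[idx]
            # A call leaf routes through a shared subfunction tree: this is the
            # channel through which the two task trees can reuse knowledge.
            return self.evaluate(self.subfunctions[idx], inputs)
        a, b = (self.evaluate(c, inputs) for c in tree.children)
        return {"add": a + b, "sub": a - b, "mul": a * b,
                "min": min(a, b), "max": max(a, b)}[tree.op]


ind = Individual()
print([round(ind.evaluate(t, [1.0, 2.0, 3.0, 4.0]), 3) for t in ind.task_trees])
```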

Cited by 13 publications (7 citation statements)
References 22 publications
“…It is possible to identify three types of GP-based approaches: (1) those that employ GP to detect low-level features which have been predefined by human experts, such as corners or edges [21,44,60-62,67] and recently one regarding vegetation indices used on remote sensing [46,47];…”
Section: Computer Vision Applications
mentioning
confidence: 99%
“…Let us finally note that the above algorithm shares some common elements with our previous studies on cross-task knowledge reuse [4] and knowledge reuse for visual learning [5], where we have demonstrated that crossing over individuals that solve different visual tasks speeds up the learning. Here, however, we are interested in a scenario where the input to the method is a single task (problem).…”
Section: A. The Niching Algorithm (NA)
mentioning
confidence: 62%
“…For all other uses, contact the owner/author(s). GECCO '18, July 15-19, 2018, Kyoto, Japan. Much of the research in deep learning in recent years has focused on coming up with better architectures, and MTL is no exception. As a matter of fact, architecture plays possibly an even larger role in MTL because there are many ways to tie the multiple tasks together.…”
Section: Introduction
mentioning
confidence: 99%
“…In the convex optimization setting, this idea has been implemented via various regularization penalties on shared parameter matrices [1,7,18,22]. Evolutionary methods have also had success in MTL, especially in sequential decision-making domains [13,16,19,38,41].…”
mentioning
confidence: 99%
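As a generic, minimal illustration of the "regularization penalties on shared parameter matrices" idea mentioned in the statement above (a sketch on synthetic data, not taken from references [1,7,18,22]; the mean-tying penalty, the linear models, and all hyperparameters are assumptions for exposition), one common form penalizes each task's weight vector for straying from the across-task mean:

```python
import numpy as np

# Sketch: multitask linear regression with a shared-parameter penalty
#   (lam / 2) * sum_t || w_t - w_bar ||^2,  w_bar = mean of task weights,
# which pulls task-specific weights toward a common vector. Synthetic data;
# w_bar is treated as fixed within each sweep (block-coordinate style update).

rng = np.random.default_rng(0)
n_tasks, n_samples, n_features, lam, lr = 3, 50, 5, 0.1, 0.05

# Tasks whose true weights are small perturbations of a common vector.
w_true = rng.normal(size=n_features)
X = [rng.normal(size=(n_samples, n_features)) for _ in range(n_tasks)]
y = [X[t] @ (w_true + 0.1 * rng.normal(size=n_features)) for t in range(n_tasks)]

W = np.zeros((n_tasks, n_features))
for _ in range(500):
    w_bar = W.mean(axis=0)
    for t in range(n_tasks):
        residual = X[t] @ W[t] - y[t]
        grad = X[t].T @ residual / n_samples + lam * (W[t] - w_bar)
        W[t] -= lr * grad

print("spread of task weights around their mean:",
      np.linalg.norm(W - W.mean(axis=0)))
```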