Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation (GECCO 2007)
DOI: 10.1145/1276958.1277289

Learning noise

Abstract: In this paper we propose a genetic programming approach to learning stochastic models with unsymmetrical noise distributions. Most learning algorithms try to learn from noisy data by modeling the maximum likelihood output or least squared error, assuming that noise effects average out. While this process works well for data with symmetrical noise distributions (such as Gaussian observation noise), many real-life sources of noise are not symmetrically distributed, thus this approach does not hold. We suggest im…
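The asymmetry problem the abstract points to is easy to demonstrate numerically. The sketch below is an illustrative toy, not code from the paper: it fits a constant by least squares (the sample mean) under symmetric Gaussian noise and under skewed exponential noise, and the estimate is biased only in the asymmetric case.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value, n = 5.0, 100_000

# Symmetric noise: zero-mean Gaussian. Errors average out, so the
# least-squares estimate of a constant (the sample mean) is unbiased.
sym = true_value + rng.normal(0.0, 1.0, n)

# Asymmetric noise: exponential with mean 1.0. The noise does not
# average to zero, so the least-squares fit is shifted by the noise mean.
asym = true_value + rng.exponential(1.0, n)

print(f"true value:     {true_value:.3f}")
print(f"symmetric fit:  {sym.mean():.3f}")   # ~ 5.0
print(f"asymmetric fit: {asym.mean():.3f}")  # ~ 6.0, biased upward
```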

Cited by 17 publications (9 citation statements); references 10 publications. Citing publications span 2009–2023.
“…After this point, five of the seven states can be modeled exactly up to 30% noise, and four of the seven at 50% noise. It is worth noting that many real biological systems may also contain different types of noise, such as asymmetrical noise, and a modified equation search method could be used in these cases, such as explicitly including symbolic noise sources in the ODE model [72]. …”
Section: Methods (mentioning)
confidence: 99%
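Reference [72] in the statement above is the paper under review; "explicitly including symbolic noise sources in the ODE model" can be pictured as fitting the deterministic part of a model jointly with a declared, skewed noise term by maximum likelihood. The following sketch is a hypothetical toy under strong simplifications (an analytically solvable ODE, and a grid search standing in for the genetic programming search); all names and values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: dx/dt = -a*x, observed with skewed (exponential) noise.
a_true, lam_true = 0.5, 1.0
t = np.linspace(0.0, 5.0, 200)
x_true = np.exp(-a_true * t)                  # analytic solution, x(0) = 1
y = x_true + rng.exponential(1.0 / lam_true, t.size)

# Candidate model with an explicit noise symbol: y = x(t; a) + Exp(lam).
# Negative log-likelihood of the residuals under the declared noise model.
def neg_log_likelihood(params):
    a, lam = params
    resid = y - np.exp(-a * t)
    if lam <= 0 or np.any(resid < 0):         # exponential noise is non-negative
        return np.inf
    return -(np.log(lam) - lam * resid).sum()

# Crude grid search stands in for the search over model structures.
grid = [(a, lam) for a in np.linspace(0.1, 1.0, 46)
                 for lam in np.linspace(0.2, 3.0, 57)]
a_fit, lam_fit = min(grid, key=neg_log_likelihood)
print(f"a ~ {a_fit:.2f} (true {a_true}), lam ~ {lam_fit:.2f} (true {lam_true})")
```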
“…Modern approaches to symbolic regression make use of sophisticated techniques to speed up convergence and improve the quality of results: competitive co-evolution [20], incorporation of expert knowledge [21], noise modeling [22], and, most notably, Pareto-like optimization [23]. Taking inspiration from works on multi-objective optimization, Pareto-like symbolic regression evaluates candidate solutions on more than one feature, for example fitting and complexity: instead of returning the user a single, optimal solution, such algorithms will show a Pareto front comprising several solutions, each one an optimal trade-off between the two objectives.…”
Section: Symbolic Regression (mentioning)
confidence: 99%
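The Pareto front the statement describes keeps every candidate that no other candidate beats on both error and complexity at once. A minimal sketch of that filter follows; the expressions and scores are made up for illustration, not taken from any cited system.

```python
# Minimal Pareto-front filter over (error, complexity) pairs, as used in
# Pareto-like symbolic regression. Candidate scores are hypothetical.
candidates = [
    ("x",              0.90, 1),   # (expression, fit error, complexity)
    ("a*x",            0.40, 2),
    ("a*x + b",        0.25, 3),
    ("a*x**2 + b*x",   0.26, 5),   # dominated by "a*x + b"
    ("a*sin(b*x) + c", 0.10, 6),
    ("deep tree",      0.10, 14),  # dominated: same error, more complex
]

def dominates(p, q):
    """p dominates q if it is no worse on both objectives and better on one."""
    return (p[1] <= q[1] and p[2] <= q[2]) and (p[1] < q[1] or p[2] < q[2])

front = [c for c in candidates
         if not any(dominates(other, c) for other in candidates)]
for expr, err, size in front:
    print(f"{expr:<16} error={err:.2f} size={size}")
```

Running this prints the four non-dominated trade-offs, which is exactly what a Pareto-like symbolic regression system would present to the user instead of a single "best" model.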
“…In spite of the fact that these models utilise the data as it arrives, there can still arise situations where the underlying assumptions of the model no longer hold. We call such settings dynamic environments, where changes in data distribution [1], change in features' relevance [2], non-symmetrical noise levels [3] are common. It has been shown that many changes in the environment which are no longer being reflected in the model contribute to the deterioration of model's accuracy over time [4]- [7].…”
Section: Introduction (mentioning)
confidence: 99%