2022
DOI: 10.1109/access.2022.3189163

Multi-Objective Deep Network-Based Estimation of Distribution Algorithm for Music Composition

Abstract: In the field of evolutionary-algorithm music composition, most current research focuses on enhancing environmental selection in multi-objective evolutionary algorithms (MOEAs). However, the real music composition process, formulated as a large-scale multi-objective optimization problem (LSMOP), involves a huge number of combinations, and existing MOEA-based optimization can struggle to explore the search space effectively. To address this issue, we propose a new Multi-Objective Generative Deep …

Cited by 1 publication (2 citation statements)
References 30 publications

“…In most EDAs that do not restrict the dependence relationships between variables, the joint probability distribution is estimated by a Bayesian network (Section II-C) learned from data. EDAs have also been developed with the probability distribution estimated from log-linear probability models [39], probabilistic principal component analysis [40], Kikuchi approximations [41], Markov networks [42], [43], Markov chains [44], copulas and vines [45], a reinforcement learning-based method [46], Gaussian adaptive resonance theory neural networks [47], growing neural gas networks [48], restricted Boltzmann machines [49], [50], [51] and, in the deep learning area, from autoencoders [52], variational autoencoders [53], [54], and generative adversarial networks [55]. Model selection in EDAs is a more complex problem.…”
Section: Initial Population of Candidate Solutions
confidence: 99%
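The statement above describes the core EDA loop: estimate a probability model from selected solutions, then sample new candidates from it. As a minimal illustration of the simplest case — a fully factorized (univariate) model rather than the Bayesian-network or deep-network models the statement lists — here is a PBIL-style sketch on the OneMax problem; the function names, parameters, and problem are illustrative assumptions, not the cited paper's method:

```python
import random

def pbil(fitness, n_bits, pop_size=50, n_select=10, lr=0.1, generations=60, seed=0):
    """Population-Based Incremental Learning: a univariate EDA.

    A probability vector p models each bit independently. Each
    generation: sample a population from p, select the best
    individuals, and shift p toward their empirical bit frequencies.
    """
    rng = random.Random(seed)
    p = [0.5] * n_bits  # initial model: every bit is 50/50
    best = None
    for _ in range(generations):
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        elite = pop[:n_select]
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
        # estimate marginal frequencies from the selected set
        freq = [sum(ind[i] for ind in elite) / n_select for i in range(n_bits)]
        p = [(1 - lr) * pi + lr * fi for pi, fi in zip(p, freq)]
    return best

# OneMax: maximize the number of ones in the bit string
solution = pbil(sum, n_bits=20)
```

Richer EDAs replace the independent probability vector with a model that captures dependencies between variables — exactly the role the Bayesian networks, restricted Boltzmann machines, and (variational) autoencoders cited above play.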
“…Most of these algorithms, EDAs included, simplify the problem by reducing the m-dimensional space to a scalar value with fitness functions like the convergence indicator, the Pareto-optimal front coverage indicator, the hypervolume indicator and the unary additive ε-indicator. This is the strategy followed by EDAs based on neural networks [47], [48], [54], [51], on probabilistic models [82], [103], [107] or on a Parzen estimator [108].…”
Section: E. Multiobjective EDAs
confidence: 99%
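To make the scalarization idea above concrete, here is a minimal sketch of the hypervolume indicator for a two-objective minimization problem: the non-dominated front is collapsed to a single number, the area it dominates relative to a reference point. The sweep-line formulation, reference point, and example front are illustrative assumptions, not taken from the cited works:

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-D Pareto front (minimization),
    measured against a reference point `ref` that every point dominates.

    Sort the non-dominated points by the first objective and sum the
    rectangular slices between consecutive points and the reference.
    """
    pts = sorted(front)  # ascending f1, hence descending f2 on a Pareto front
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Three mutually non-dominated points; larger hypervolume = better front
front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(5.0, 5.0))  # 4*1 + 3*2 + 2*1 = 12.0
```

An optimizer can then maximize this one scalar instead of handling the m-dimensional objective space directly, which is the simplification strategy the statement attributes to indicator-based EDAs.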