2007 IEEE International Conference on Systems, Man and Cybernetics
DOI: 10.1109/icsmc.2007.4414136
Automatic music genre classification using ensemble of classifiers

Abstract: The version in the Kent Academic Repository may differ from the final published version. Users are advised to check http://kar.kent.ac.uk for the status of the paper. Users should always cite the published version of record.

Cited by 44 publications (36 citation statements)
References 19 publications
“…Despite the complex situation that emerges in the problem of automatic genre classification [41][42][43][44], our model is very simple. From the qualitative point of view, the characteristic of songs and music genres is related with multidimensional aspects like timbre, melody, harmony, rhythm, among others.…”
mentioning
confidence: 99%
“…In the same year, Kosina developed in [5] a music genre classification system called MUGRAT, achieving approximately 88% recognition for three music genres. Silla et al. [11] carried out a different analysis in which, instead of extracting information from the whole song, they divided the audio into three parts (beginning, middle, and end, with 30-second excerpts each), used a different classifier for each part, and combined the results. That work achieved an average accuracy 3% higher than the best individual result (55.15% without the combination versus 58.07% using three segments, for ten music genres).…”
Section: Related Work
unclassified
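The segment-ensemble idea described in the quotation above (classify 30-second excerpts from the beginning, middle, and end of a track separately, then combine the decisions) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the classifier and feature-extraction callables are hypothetical placeholders.

```python
# Sketch of a time-segment ensemble: one classifier per segment
# (beginning, middle, end), combined by majority vote.
# All helper names here are illustrative, not from the paper.
from collections import Counter

def classify_by_segments(audio, segment_classifiers, extract_features):
    """Classify a track by voting over beginning/middle/end segments.

    audio: a sequence of samples.
    segment_classifiers: three callables, one per segment, each mapping
        a feature representation to a genre label.
    extract_features: callable mapping a raw segment to features.
    """
    n = len(audio)
    seg_len = min(30 * 22050, n // 3)  # 30 s at 22.05 kHz, capped
    segments = [
        audio[:seg_len],                               # beginning
        audio[(n - seg_len) // 2:(n + seg_len) // 2],  # middle
        audio[n - seg_len:],                           # end
    ]
    votes = [clf(extract_features(seg))
             for clf, seg in zip(segment_classifiers, segments)]
    # Majority vote over the three segment decisions.
    return Counter(votes).most_common(1)[0][0]
```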
“…In this table we employ a binary BME mask, for (B)eginning, (M)iddle and (E)nd time segments, where 1 indicates that the feature was selected in the corresponding time segment, and 0 otherwise. [31], [33] and [34]. Recall that features 1 to 6 are Beat related, 7 to 25 are related to Timbral Texture, and 26 to 30 are Pitch related.…”
Section: Methods
mentioning
confidence: 99%
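The BME mask described above can be made concrete with a small sketch: each feature index (1 to 6 Beat, 7 to 25 Timbral Texture, 26 to 30 Pitch, per the quotation) carries one bit per time segment. The function and data layout below are illustrative assumptions, not the paper's code.

```python
# Minimal illustration of a binary BME (Beginning/Middle/End) mask:
# for each feature index, one bit per time segment says whether that
# feature is used in that segment's feature vector.
def select_features(feature_vectors, bme_mask):
    """Apply a BME mask to per-segment feature vectors.

    feature_vectors: {"beginning": [...], "middle": [...], "end": [...]},
        each a list of 30 feature values (1-indexed in the text).
    bme_mask: {feature_index: (b, m, e)} with bits in {0, 1}.
    Returns the selected values per segment.
    """
    selected = {}
    for si, segment in enumerate(("beginning", "middle", "end")):
        selected[segment] = [
            feature_vectors[segment][idx - 1]   # text uses 1-based indices
            for idx, bits in sorted(bme_mask.items())
            if bits[si] == 1
        ]
    return selected
```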
“…We use feature space decomposition following the OAA and RR approaches, and also features extracted from different time segments [31], [33], [34]. Therefore several feature vectors and component classifiers are used in each music part, and a combination procedure is employed to produce the final class label for the music.…”
Section: The Space-time Decomposition Approach
mentioning
confidence: 99%
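The quotation above mentions One-Against-All (OAA) and Round-Robin (RR) feature-space decompositions, each producing component classifiers whose outputs are combined into a final label. A minimal sketch of the two combination schemes, assuming hypothetical scorer/classifier callables (none of these names come from the paper):

```python
# OAA: one binary scorer per genre; the most confident scorer wins.
# RR: one classifier per genre pair; the genre with most pairwise wins.
from collections import Counter

def oaa_predict(feature_vec, binary_scorers):
    """binary_scorers: {genre: callable returning a confidence score}."""
    return max(binary_scorers, key=lambda g: binary_scorers[g](feature_vec))

def rr_predict(feature_vec, pairwise_classifiers):
    """pairwise_classifiers: {(g1, g2): callable returning the winner}."""
    wins = Counter()
    for pair, clf in pairwise_classifiers.items():
        wins[clf(feature_vec)] += 1   # count one win per pairwise duel
    return wins.most_common(1)[0][0]
```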