Automatic Soundscape Affect Recognition Using A Dimensional Approach (2016)
DOI: 10.17743/jaes.2016.0044

Cited by 20 publications (24 citation statements); references 0 publications.
Citation statements: 1 supporting, 23 mentioning, 0 contrasting, from citing publications spanning 2017-2023.
“…For both arousal and valence, the loudness feature set performs well for all three datasets. This corresponds to the findings in previous studies [7,19]. Although rhythmic features perform better for arousal prediction of WCMED and valence prediction of CCMED, a paired t-test shows the difference is not significant.…”
Section: Feature Analysis (supporting)
confidence: 89%
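The statement above compares feature sets with a paired t-test over predictions on the same clips. Below is a minimal sketch of that kind of comparison, assuming per-clip prediction errors for two hypothetical feature sets; the array sizes and error values are illustrative and are not data from the cited study.

```python
# Hedged sketch: paired t-test comparing two feature sets' per-clip errors.
# All numbers here are synthetic placeholders, not results from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical absolute errors of arousal predictions on the same 30 clips,
# one array per feature set (loudness-based vs. rhythm-based model).
errors_loudness = rng.normal(loc=0.18, scale=0.05, size=30)
errors_rhythmic = rng.normal(loc=0.16, scale=0.05, size=30)

# Paired test is appropriate because both models are scored on the same clips.
t_stat, p_value = stats.ttest_rel(errors_rhythmic, errors_loudness)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above the chosen threshold (e.g. 0.05) would mirror the quoted
# conclusion that the apparent advantage of rhythmic features is not significant.
```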
“…Previous studies indicate arousal is easier to predict [7,14]. For both soundscape and music, arousal is determined by the level of eventfulness and activation.…”
Section: Discussion (mentioning)
confidence: 98%
“…Moreover, as in the first experiment, the focus was to observe the influence of the temporal structure in an environment, so sound markers or events have been removed. Such events, as semantic content, are suspected to significantly influence the overall rating of the sound environment [43]. These events and markers, present in the second experiment, might have masked the recency and trend effects that were observed in the first experiment.…”
(mentioning)
confidence: 91%
“…Management of audio data typically involves assigning textual descriptors and allocating audio to a predefined category. Previous novel approaches to the problem of organising audio data into categories include: augmenting the WordNet framework [1,2] with audio concepts in order to classify sounds [3,4]; using Gaver's [5] taxonomy based upon the mechanical properties of sound-causing events in an audio retrieval system [6]; classifying urban noise complaints [7]; classification by affect ratings [8]; and using hyponym generation from web text with subsequent manual refinement [9]. (*This work was supported by EPSRC grant EP/N014111/1 'Making Sense of Sounds' and by European Commission H2020 research and innovation grant 688382 'AudioCommons'.)…”
Section: Introduction (mentioning)
confidence: 99%
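The statement above situates the cited paper among categorical approaches to organising audio, whereas the paper's title indicates a dimensional approach, i.e. predicting continuous arousal and valence values rather than assigning discrete labels. The sketch below illustrates that distinction under stated assumptions: the features, ratings, and ridge regressor are hypothetical placeholders, not the paper's actual pipeline.

```python
# Hedged sketch: dimensional affect regression mapping per-clip audio features
# to continuous (arousal, valence) values instead of a predefined category.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Hypothetical per-clip features (e.g. mean loudness, spectral centroid, event rate).
X = rng.normal(size=(100, 3))

# Hypothetical listener ratings on arousal and valence scales in [-1, 1].
y = np.column_stack([
    np.tanh(X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)),  # arousal
    np.tanh(-0.3 * X[:, 1] + rng.normal(scale=0.1, size=100)),           # valence
])

# One multi-output ridge regressor; any regressor with continuous outputs would do.
model = Ridge(alpha=1.0).fit(X, y)
print(model.predict(X[:5]))  # predicted (arousal, valence) pairs for five clips
```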