Understanding the relationship between people and their soundscapes in an urban context of innumerable, diverse sensory stimuli is a difficult endeavor. What public space users hear, and how they evaluate it in relation to their performed or intended activities, can influence their engagement with a space as well as their assessment of its suitability for their needs and expectations. While the interaction between auditory experience and activity is gaining momentum as a topic in soundscape research, capturing the complexity of this relationship in context remains a multifaceted challenge. In this paper, we address this challenge by examining user-soundscape relationships in relation to users’ activities. Building on previous soundscape studies, we explore the role and interaction of three factors that potentially influence users’ soundscape evaluations: the level of social interaction of users’ activities, familiarity, and expectations; we also employ affordance theory to examine the ways in which users bring their soundscapes into use. To this end, we adopt a mixed-methods design, combining quantitative, qualitative and spatial analyses to study how users of three public spaces in Amsterdam evaluate their soundscapes in relation to their activities. We documented the use of an urban park in Amsterdam through non-intrusive behavioral mapping, collecting spatial data on observable categories of activities, and integrated these observations with on-site questionnaires collected from park users at the same time, combining ranked soundscape evaluations with free responses detailing the reasons behind those evaluations. A key finding is that solitary and socially interactive respondents evaluate their soundscapes differently in relation to their activities, with socially interactive respondents giving higher suitability and lower disruption ratings than solitary ones; this points to qualitatively different auditory experiences, which we analyze further through users’ open-ended justifications of their ratings. We offer a methodological contribution (adding to existing soundscape evaluation methodologies), an empirical contribution (insight into how users explain their soundscape evaluations in relation to their activities), and a policy- and design-related contribution: a transferable methodology and process that practitioners can employ in their work on the built environment to address the multisensory experience of public spaces.
TABLE SDC1. Details of records included in the meta-analysis, broken down by relevant component studies. Columns: Author(s) and year; Study; n; Group; Age* (years†); Onset of deafness; Age at CI activation (years†); Duration of CI use (years†); Prosody; Language; Stimuli; Cues (f0/int/dur); AFC; Measure; Comment. First entry: Agrawal et al. (2012).
When we speak, we can vary how we use our voices. Our speech can be high or low (pitch), loud or soft (loudness), and fast or slow (duration). This variation in pitch, loudness, and duration is called speech prosody. It is a bit like making music. Varying our voices when we speak can express sarcasm or emotion and can even change the meaning of what we are saying. So, speech prosody is a crucial part of spoken language. But how do speakers produce prosody? How do listeners hear and understand these variations? Is it possible to hear and interpret prosody in other languages? And what about people whose hearing is not so good? Can they hear and understand prosodic patterns at all? Let’s find out!
New language technologies are coming, thanks to huge and competing private investment fuelling rapid progress; we can either understand and foresee their effects, or be taken by surprise and spend our time trying to catch up. This report sketches out some transformative new technologies that are likely to fundamentally change our use of language. Some of these may feel unrealistically futuristic or far-fetched, but a central purpose of this report - and the wider LITHME network - is to illustrate that these are mostly just the logical development and maturation of technologies currently in prototype. But will everyone benefit from all these shiny new gadgets? Throughout this report we emphasise the range of groups who will be disadvantaged and the issues of inequality that arise. Important issues of security and privacy will accompany new language technologies. A further caution is to re-emphasise the current limitations of AI. Looking ahead, we see many intriguing opportunities and new capabilities, but also a range of uncertainties and inequalities. New devices will enable new ways to talk, to translate, to remember, and to learn. But advances in technology will reproduce existing inequalities: among those who cannot afford these devices, among the world’s smaller languages, and especially for sign languages. Debates over privacy and security will flare and crackle with every new immersive gadget. We will move together into this curious new world with a mix of excitement and apprehension - reacting, debating, sharing and disagreeing as we always do. Plug in, as the human-machine era dawns.
A fundamental aspect of learning in biological neural networks is plasticity, the property that allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling plasticity in artificial neural networks (ANNs), based on the local interactions of neurons. However, how a coherent global learning behavior emerges from local Hebbian plasticity rules is not well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules for learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime-learning settings. The evolved rules converged into a set of well-defined, interpretable types, which are thoroughly discussed. Notably, while adapting the ANNs during the learning tasks, these rules perform comparably to offline learning methods such as hill climbing.
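To make the general setup concrete, the sketch below evolves a genome encoding a discrete rule index plus a learning rate, scoring each genome by how well its rule adapts a tiny network during an online "lifetime". This is a minimal illustration, not the paper's implementation: the rule bank, the random-teacher regression task (standing in for the foraging and prey-predator scenarios), and all GA settings are assumptions made for brevity.

```python
# Minimal sketch (illustrative assumptions, not the authors' code):
# evolving a discrete Hebbian-style plasticity rule with a genetic algorithm.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT, LIFETIME = 8, 3, 200
W_TEACHER = rng.normal(size=(N_IN, N_OUT))  # hidden mapping the network should absorb

# Discrete bank of candidate local rules: delta_w = f(pre, post, w).
RULE_BANK = [
    lambda pre, post, w: np.outer(pre, post),            # plain Hebb
    lambda pre, post, w: np.outer(pre, post) - 0.1 * w,  # Hebb with weight decay
    lambda pre, post, w: np.outer(pre, post - pre @ w),  # error-modulated (delta-like)
    lambda pre, post, w: np.zeros_like(w),               # no plasticity (control)
]


def fitness(genome):
    """Lifetime-learning fitness: adapt weights online with the encoded
    rule, then score prediction error on fresh inputs (higher is better)."""
    rule_idx, lr = genome
    w = np.zeros((N_IN, N_OUT))
    for _ in range(LIFETIME):
        x = rng.normal(size=N_IN)
        y = x @ W_TEACHER                 # clamped target activity for the update
        w += lr * RULE_BANK[rule_idx](x, y, w)
    x_test = rng.normal(size=(100, N_IN))
    return -np.mean((x_test @ w - x_test @ W_TEACHER) ** 2)


def mutate(genome):
    rule_idx, lr = genome
    if rng.random() < 0.2:                # occasionally swap the rule type
        rule_idx = int(rng.integers(len(RULE_BANK)))
    lr = float(np.clip(lr + rng.normal(scale=0.005), 1e-4, 0.1))
    return rule_idx, lr


# Truncation-selection GA over (rule index, learning rate) genomes.
pop = [(int(rng.integers(len(RULE_BANK))), 0.01) for _ in range(20)]
for generation in range(30):
    survivors = sorted(pop, key=fitness, reverse=True)[: len(pop) // 4]
    children = [mutate(survivors[int(rng.integers(len(survivors)))])
                for _ in range(len(pop) - len(survivors))]
    pop = survivors + children

best_rule, best_lr = max(pop, key=fitness)
print(f"evolved rule index: {best_rule}, learning rate: {best_lr:.4f}")
```

Encoding each rule as an index into a small, finite bank is what keeps the search space discrete and the winning rules human-readable, which mirrors the interpretability goal the abstract describes.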