Background: Ecological momentary assessment (EMA) is a method for evaluating hearing aids in everyday life that uses repeated smartphone-based questionnaires to assess a situation as it happens. Although ecologically valid and free of memory bias, the method may be prone to selection biases when questionnaires are skipped or the phone is not carried along in certain situations.
Purpose: This investigation analyzed which situations are underrepresented in questionnaire responses and in objectively measured EMA data (e.g., sound level), and how such underrepresentation may depend on different triggers.
Method: In an EMA study, 20 subjects with hearing impairment provided daily information on reasons for missed data, that is, skipped questionnaires or missing connections between their phone and hearing aids.
Results: Participants often deliberately did not bring the study phone to social situations, or skipped questionnaires because they considered answering inappropriate, for example, during a church service or when engaged in conversation. They answered fewer questions in conversations with multiple partners and were more likely to postpone questionnaires when not in quiet environments.
Conclusion: Data for social situations are likely to be underrepresented in EMA. However, these situations are particularly important for the evaluation of hearing aids, as individuals with hearing impairment often have difficulty communicating in noisy situations. It is therefore vital to optimize study design to balance avoiding memory bias against enabling subjects to report retrospectively on situations in which phone use may be difficult. The implications for several applications of EMA are discussed.
Supplemental Material: https://doi.org/10.23641/asha.12746849
For processing and segmenting visual scenes, the brain must combine a multitude of features and sensory channels. It is not known whether these complex tasks involve optimal integration of information, nor which objectives the underlying computations might serve. Here, we investigate whether optimal inference can explain contour integration in human subjects. We performed experiments in which observers detected contours of curvilinearly aligned edge configurations embedded in randomly oriented distractors. The key feature of our framework is a generative process for creating the contours, from which a class of ideal detection models can be derived. This allowed us to compare human detection of contours with different statistical properties to the corresponding ideal detection models for the same stimuli. We then subjected the detection models to realistic constraints and required them to reproduce human decisions for every stimulus as closely as possible. By independently varying the four model parameters, we identify a single detection model which quantitatively captures all correlations of human decision behaviour for more than 2,000 stimuli from 42 contour ensembles with greatly varying statistical properties. This model reveals specific interactions between edges that closely match independent findings from physiology and psychophysics. These interactions imply a statistics of contours for which edge stimuli are indeed optimally integrated by the visual system, with the objective of inferring the presence of contours in cluttered scenes. The recurrent algorithm of our model makes testable predictions about the temporal dynamics of neuronal populations engaged in contour integration, and it suggests a strong directionality of the underlying functional anatomy.
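The abstract leaves the generative contour process and the ideal-observer computation implicit. Purely as a hedged illustration (all function names, parameter values, and distributional choices below are our own assumptions, not the paper's specification), the following Python sketch samples a contour as a chain of edge elements with Gaussian-distributed turning angles and scores an ordered candidate chain with a log-likelihood ratio of "contour process" versus "independent random distractors"; a full ideal detection model would additionally marginalize over all possible contour configurations in the stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

def wrap(a, period):
    """Wrap angles into (-period/2, period/2]."""
    return (a + period / 2) % period - period / 2

def log_gauss(x, sd):
    return -0.5 * (x / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))

def make_contour(n_edges=8, step=1.0, curv_sd=0.35, jitter_sd=0.15):
    """Sample one contour: the path direction performs a random walk
    (turning angle ~ N(0, curv_sd)), and each edge element is oriented
    along its outgoing path segment plus small jitter (assumed process)."""
    direction = rng.uniform(0, 2 * np.pi)
    pos, positions, oris = np.zeros(2), [], []
    for _ in range(n_edges):
        positions.append(pos.copy())
        oris.append((direction + rng.normal(0, jitter_sd)) % np.pi)
        pos = pos + step * np.array([np.cos(direction), np.sin(direction)])
        direction += rng.normal(0, curv_sd)
    return np.array(positions), np.array(oris)

def chain_log_likelihood_ratio(positions, oris, curv_sd=0.35, jitter_sd=0.15):
    """Toy evidence that an ordered chain of edges stems from the contour
    process rather than from uniformly oriented, independent distractors."""
    steps = np.diff(positions, axis=0)
    path_dirs = np.arctan2(steps[:, 1], steps[:, 0])
    turns = wrap(np.diff(path_dirs), 2 * np.pi)
    # edge orientations are undirected, so misalignment is wrapped modulo pi
    mis = wrap(oris[:-1] - path_dirs, np.pi)
    llr = np.sum(log_gauss(turns, curv_sd) - np.log(1 / (2 * np.pi)))
    llr += np.sum(log_gauss(mis, jitter_sd) - np.log(1 / np.pi))
    return llr

positions, oris = make_contour()
print(chain_log_likelihood_ratio(positions, oris))  # large -> contour-like
```

Varying curv_sd and jitter_sd in such a generator is one way to produce contour ensembles with systematically different statistical properties, against which both humans and model observers can be tested.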
Intracellular studies have revealed the importance of cotuned excitatory and inhibitory inputs to neurons in auditory cortex, but typical spectrotemporal receptive field models of neuronal processing cannot account for this overlapping tuning. Here, we apply a new nonlinear modeling framework to extracellular data recorded from primary auditory cortex (A1), which enables us to explore how the interplay of excitation and inhibition contributes to the processing of complex natural sounds. The resulting description produces more accurate predictions of observed spike trains than the linear spectrotemporal model, and the properties of excitation and inhibition inferred by the model are furthermore consistent with previous intracellular observations. The model can also describe several nonlinear properties of A1 that are not captured by linear models, including intensity tuning and selectivity to sound onsets and offsets. These results thus offer a broader picture of the computational role of excitation and inhibition in A1 and support the hypothesis that their interactions play an important role in the processing of natural auditory stimuli.
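The abstract does not spell out the model equations. One common way to capture interacting excitation and inhibition beyond a single linear spectrotemporal filter is a rectified-subunit (LN-LN) architecture; the sketch below is our own minimal, assumption-laden illustration of that idea, not the authors' published implementation.

```python
import numpy as np

def exc_inh_rate(stim, k_exc, k_inh, theta_e=0.0, theta_i=0.0):
    """Predicted firing rate of a minimal excitation/inhibition model:
    two linear spectrotemporal filters whose rectified outputs are
    combined with opposite signs, then passed through a softplus
    output nonlinearity. stim: (n_samples, n_features) array of
    flattened spectrogram patches."""
    relu = lambda x, t: np.maximum(x - t, 0.0)
    drive = relu(stim @ k_exc, theta_e) - relu(stim @ k_inh, theta_i)
    return np.log1p(np.exp(drive))  # softplus keeps rates non-negative

# Toy demonstration with random filters and stimuli (purely illustrative):
# an inhibitory filter with the same tuning as excitation but delayed in
# time leaves a transient window of net drive at stimulus onsets.
rng = np.random.default_rng(1)
stim = rng.normal(size=(200, 60))
k_exc = rng.normal(size=60)
k_inh = np.roll(k_exc, 5)  # cotuned, time-shifted inhibition (assumption)
rates = exc_inh_rate(stim, k_exc, k_inh)
print(rates.mean())
```

Because the inhibitory subunit shares the excitatory tuning but lags it, the net drive is strongest at sound onsets; this is one way such a model can express the onset selectivity and intensity tuning that purely linear models miss.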
Understanding and exploiting the abilities of the human visual system is an important part of designing usable user interfaces and information visualizations. Good design enables quick, easy, and veridical perception of the key components of that design. An important facet of human vision is its seemingly effortless ability to perform "perceptual organization": it transforms individual feature estimates into the perception of coherent regions, structures, and objects. We perceive regions grouped by proximity and feature similarity, curves grouped by good continuation, and regions of coherent texture. In this paper, we discuss a simple model for a broad range of perceptual grouping phenomena. It takes an arbitrary image as input and returns a structure describing the predicted visual organization of the image. We demonstrate that this model can capture aspects of traditional design rules and predicts visual percepts in classic perceptual grouping displays.
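The abstract does not describe the model's machinery. As a generic, hedged sketch of grouping by proximity and feature similarity only (the function names, affinity form, scales, and threshold below are our assumptions, not the paper's model), display elements can be linked in an affinity graph and predicted groups read out as connected components:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def group_elements(pos, feat, dist_scale=1.0, feat_scale=1.0, thresh=0.5):
    """Predict perceptual groups from element positions and a scalar
    feature (e.g., luminance): pairwise affinity decays with spatial
    distance and feature difference; pairs above threshold are linked,
    and connected components of the link graph form the groups."""
    d_space = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    d_feat = np.abs(feat[:, None] - feat[None, :])
    affinity = np.exp(-d_space / dist_scale) * np.exp(-d_feat / feat_scale)
    _, labels = connected_components(csr_matrix(affinity > thresh),
                                     directed=False)
    return labels

# two spatially separated dot clusters with different luminance
rng = np.random.default_rng(2)
pos = np.vstack([rng.random((10, 2)), rng.random((10, 2)) + 3.0])
feat = np.array([0.2] * 10 + [0.9] * 10)
print(group_elements(pos, feat))  # expected: two distinct group labels
```

A fuller account in the spirit of the paper would add further cues such as good continuation and texture coherence as additional affinity terms over the same graph.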
Basic perceptual quality of coded audio material is commonly evaluated using listening tests according to ITU-R BS.1534 (MUSHRA, Multiple Stimuli with Hidden Reference and Anchor). The MUSHRA guidelines call for experienced listeners. However, the majority of consumers using the final product are not expert listeners, and the degree of expertise may vary among listeners within the same laboratory. It would therefore be useful to know how audio quality evaluation differs between trained and untrained listeners, and how training and the actual tests should be designed to be as reliable as possible. To investigate the rating differences between experts and non-experts, we performed MUSHRA listening tests with 13 experienced and 11 inexperienced listeners using 5 speech and audio codecs delivering a wide range of basic audio quality. Except for the hidden reference, the absolute ratings of non-experts were consistently higher than those of experts. However, the rank order only rarely changed between experts and non-experts. For lower quality values, confidence intervals were significantly larger for non-experts than for experts. Experienced listeners set more than twice as many loops as non-experts, compared between codecs more often, and listened to high-quality codecs for longer than non-experts did.
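For readers unfamiliar with how such results are summarized: MUSHRA scores (0 to 100) are typically reported per condition as a mean with a 95% confidence interval, the statistic behind the expert/non-expert comparison above. A minimal sketch follows; the listener scores are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

def mushra_summary(ratings):
    """Mean and 95% t-based confidence interval of MUSHRA scores
    (0-100) for one condition; ratings holds one score per listener."""
    ratings = np.asarray(ratings, dtype=float)
    m = ratings.mean()
    half = stats.t.ppf(0.975, len(ratings) - 1) * stats.sem(ratings)
    return m, (m - half, m + half)

# hypothetical scores for one codec condition (13 experts, 11 non-experts,
# matching the group sizes above; the values themselves are invented)
experts     = [62, 58, 65, 60, 57, 63, 61, 59, 64, 60, 62, 58, 61]
non_experts = [74, 70, 81, 68, 77, 72, 79, 66, 75, 71, 73]
for name, grp in [("experts", experts), ("non-experts", non_experts)]:
    m, ci = mushra_summary(grp)
    print(f"{name}: mean={m:.1f}, 95% CI=({ci[0]:.1f}, {ci[1]:.1f})")
```

Wider intervals for the non-expert group, as in this toy output, correspond to the lower rating reliability the study reports for inexperienced listeners at low quality levels.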