Visual perception involves continuously choosing the most prominent inputs while suppressing others. Neuroscientists induce visual competition in various ways to study why and how the brain chooses what to perceive. Recently, deep neural networks (DNNs) have been used as models of the ventral stream of the visual system, owing to similarities in both accuracy and the hierarchy of feature representations. In this study we created non-dynamic visual competition for humans by briefly presenting mixtures of two images. We then tested feed-forward DNNs with similar mixtures and examined their behavior. We found that both humans and DNNs tend to perceive only one image when presented with a mixture of two. We identified image parameters that predict this perceptual dominance and compared their predictive power across the two visual systems. Our findings can be used both to improve DNNs as models of the visual system and, potentially, to improve their performance by imitating biological behaviors.
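To make the mixture test concrete, the following sketch (an illustration, not the authors' code) blends two images pixel-wise and asks a pretrained feed-forward classifier which source image dominates its top prediction; the file names and the choice of ResNet-50 are assumptions made only for this example.

```python
# Minimal sketch (not the authors' code): present a two-image mixture to a
# feed-forward DNN and check which source image dominates the top prediction.
# "cat.jpg" and "car.jpg" are hypothetical local files; assumes a recent
# torchvision with pretrained ImageNet weights.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

img_a = preprocess(Image.open("cat.jpg").convert("RGB"))
img_b = preprocess(Image.open("car.jpg").convert("RGB"))

# 50/50 pixel-wise mixture, analogous to the briefly presented mixtures
# shown to human observers.
mixture = 0.5 * img_a + 0.5 * img_b

with torch.no_grad():
    logits_a = model(img_a.unsqueeze(0))
    logits_b = model(img_b.unsqueeze(0))
    logits_mix = model(mixture.unsqueeze(0))

# If the mixture's top class matches only one source image's top class,
# that image "dominates" the network's percept.
top_a, top_b, top_mix = (t.argmax(dim=1).item()
                         for t in (logits_a, logits_b, logits_mix))
print("dominant source:",
      "A" if top_mix == top_a else "B" if top_mix == top_b else "neither")
```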
In this study we report on a field test asking whether it is feasible to deliver a scalable, commercial-grade solution for brain-based authentication with currently available head wearables. Sixty-two (62) participants living across the United States in autumn 2020 completed four (4) at-home sessions over a single (1) week. Each session contained six (6) authentication events consisting of rapid presentation of images (10 Hz) that participants watched for 10 seconds while their brain signals were recorded with an off-the-shelf brain-signal-measuring headband. The non-stationary nature of the brain signal, and the fact that the signal results from a superposition of hundreds of simultaneous, context-dependent processes in the brain, make the data unique in time, unrepeatable, and unpredictable. Even when a participant watched identical stimuli, we found no two periods of time to be alike (Fig. 4B), and furthermore, no two combinations of time periods were alike. Differences within people (intra-participant) and across people (inter-participant) from session to session were found to be significant; however, stable processes do appear to underlie the signal's complexity and non-stationarity. We demonstrate a simplified brain-based authentication system that captures distinguishable information with reliable, commercial-grade performance from participants in their own homes. We conclude that noninvasively measured brain signals are an ideal candidate for biometric authentication, especially for head wearables such as headphones and AR/VR devices.
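As a simplified illustration of such a system, the sketch below treats authentication as verifying that a new 10 s EEG epoch belongs to the claimed participant, using band-power features and a linear classifier; the sampling rate, channel count, and synthetic data are placeholders, not the study's actual pipeline.

```python
# Minimal sketch, not the study's pipeline: verify that a 10 s EEG epoch
# comes from the claimed participant. The synthetic `epochs` and
# `participant_ids` stand in for headband recordings from the 10 Hz
# image-presentation events.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 256  # assumed headband sampling rate (Hz)

def bandpower_features(epochs, fs=FS, bands=((4, 8), (8, 12), (12, 30), (30, 45))):
    """Log band power per channel and frequency band, a standard EEG feature set."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = [np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1))
             for lo, hi in bands]
    return np.concatenate(feats, axis=-1)  # (n_epochs, n_channels * n_bands)

rng = np.random.default_rng(0)
epochs = rng.standard_normal((240, 4, 10 * FS))  # 240 epochs, 4 channels, 10 s
participant_ids = np.repeat(np.arange(10), 24)   # 10 hypothetical participants

# One-vs-rest verifier for a single claimed identity (participant 0).
X = bandpower_features(epochs)
y = (participant_ids == 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out verification accuracy:", clf.score(X_te, y_te))
```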
The goal of this study was to investigate the effect of audio listened to through headphones on subjectively reported human focus levels, and to identify through objective measures the properties that contribute most to increasing and decreasing focus in people within their regular, everyday environment. Participants (N = 62, 18–65 years) performed various tasks on a tablet computer while listening to either no audio (silence), popular audio playlists designed to increase focus (pre-recorded music arranged in a particular sequence of songs), or engineered soundscapes personalized to individual listeners (digital audio composed in real time based on input parameters such as heart rate, time of day, and location). Audio stimuli were delivered to participants through headphones while their brain signals were simultaneously recorded by a portable electroencephalography headband. Participants completed four 1-h-long sessions at home, during which different audio played continuously in the background. Using brain-computer interface technology for brain decoding, together with each individual's self-reports of focus, we obtained individual focus levels over time and used these data to analyze the effects of various properties of the sounds contained in the audio content. We found that while participants were working, personalized soundscapes increased their focus significantly above silence (p = 0.008), whereas music playlists did not have a significant effect. For the young adult demographic (18–36 years), all audio tested was significantly better than silence at producing focus (p = 0.001–0.009). Personalized soundscapes increased focus the most relative to silence, but playlists of pre-recorded songs also increased focus significantly during specific time intervals. Ultimately, we found it is possible to accurately predict human focus levels a priori based on physical properties of audio content. We then applied this finding to compare music genres and found that classical music, engineered soundscapes, and natural sounds were the best genres for increasing focus, while pop and hip-hop were the worst. These insights can enable human and artificial intelligence composers to produce increases or decreases in listener focus with high temporal (millisecond) precision. Future research will include real-time adaptation of audio for functional objectives beyond focus, such as listener enjoyment, drowsiness, stress, and memory.
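The within-participant comparison of audio conditions reported above could be set up along the following lines; the decoded focus values here are random placeholders, and the Wilcoxon signed-rank test is an assumed choice, since the abstract does not specify the statistical procedure used.

```python
# Minimal sketch with hypothetical data: average each participant's decoded
# focus per audio condition, then compare soundscapes and playlists against
# silence with a paired nonparametric test. Values are random placeholders,
# not the study's measurements.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n_participants = 62

# Mean decoded focus (0-1) per participant under each condition (placeholder).
focus = {
    "silence": rng.uniform(0.30, 0.60, n_participants),
    "playlist": rng.uniform(0.30, 0.65, n_participants),
    "personalized_soundscape": rng.uniform(0.35, 0.70, n_participants),
}

for condition in ("playlist", "personalized_soundscape"):
    stat, p = wilcoxon(focus[condition], focus["silence"])
    print(f"{condition} vs silence: W = {stat:.1f}, p = {p:.3f}")
```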
The goal of this study was to learn which properties of sound affect human focus the most. Participants (N=62, 18-65y) performed various tasks while listening to either no background sound (silence), popular music playlists intended to increase focus (pre-recorded songs), or personalized soundscapes (audio composed in real time to increase a specific individual's focus). While participants performed tasks on a tablet, they wore headphones and their brain signals were recorded using a portable electroencephalography headband. Participants completed four one-hour sessions at home, each with different audio content. We successfully generated brain-based models to predict individual participant focus levels over time and used these models to analyze the effects of various audio content during different tasks. We found that while participants were working, personalized soundscapes increased their focus significantly above silence (p=0.008), whereas music playlists did not have a significant effect. For the young adult demographic (18-36y), silence was significantly less effective at producing focus than audio content of any type tested (p=0.001-0.009). Personalized soundscapes enhanced focus the most relative to silence, but professionally crafted playlists of pre-recorded songs also increased focus during specific time intervals, especially for the youngest demographic. We also found that focus levels can be predicted from physical properties of sound, enabling human and artificial intelligence composers to test and refine audio to produce increases or decreases in listener focus with high temporal (millisecond) precision. Future research will include real-time adjustment of sound for other functional objectives, such as affecting listener enjoyment, calm, or memory.
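As a sketch of how focus might be predicted from physical properties of sound, the snippet below extracts a few standard audio descriptors that a fitted regressor could map to focus scores; the synthetic signal, the particular descriptors, and the use of librosa are illustrative assumptions rather than the study's method.

```python
# Minimal sketch under stated assumptions: compute simple physical properties
# of an audio signal that could serve as predictors of listener focus.
# The sine-plus-noise signal is a placeholder for real audio content.
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 30, 30 * sr, endpoint=False)
y = (0.5 * np.sin(2 * np.pi * 220 * t)
     + 0.05 * np.random.default_rng(3).standard_normal(t.size))

def audio_properties(y, sr):
    """Physical descriptors of the kind that could feed a focus predictor."""
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)  # estimated BPM
    return np.array([
        librosa.feature.rms(y=y).mean(),                       # loudness proxy
        librosa.feature.spectral_centroid(y=y, sr=sr).mean(),  # brightness
        librosa.feature.spectral_flatness(y=y).mean(),         # noisiness
        librosa.feature.zero_crossing_rate(y).mean(),
        tempo,
    ])

features = audio_properties(y, sr)
# A regressor fitted on (audio properties -> decoded focus) pairs could then
# score new tracks a priori; here we only print the feature vector.
print(features)
```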