Hospitals are overwhelmingly filled with sounds produced by alarms and patient monitoring devices. These sounds create a fatiguing and stressful environment for both patients and clinicians. In an attempt to attenuate this auditory sensory overload, we propose the use of a multimodal alarm system in operating rooms and intensive care units. Specifically, the system would utilize multisensory integration of the haptic and auditory channels. We hypothesize that by combining these two channels in a synchronized fashion, the auditory threshold of perception of participants will be lowered, thus allowing for an overall reduction of volume in hospitals. The results obtained from pilot testing support this hypothesis. We conclude that further investigation of this method can prove useful in reducing the sound exposure level in hospitals, as well as in personalizing the perception and type of alarm for clinicians.
The standard formulation of Reinforcement Learning lacks a practical way of specifying admissible and forbidden behaviors. Most often, practitioners approach the task of behavior specification by manually engineering the reward function, a counter-intuitive process that requires several iterations and is prone to reward hacking by the agent. In this work, we argue that constrained RL, which has almost exclusively been used for safe RL, also has the potential to significantly reduce the amount of work spent on reward specification in applied Reinforcement Learning projects. To this end, we propose to specify behavioral preferences in the CMDP framework and to use Lagrangian methods, which seek to solve a min-max problem between the agent's policy and the Lagrangian multipliers, to automatically weigh each of the behavioral constraints. Specifically, we investigate how CMDPs can be adapted to solve goal-based tasks while adhering to a set of behavioral constraints, and we propose modifications to the SAC-Lagrangian algorithm to handle the challenging case of several constraints. We evaluate this framework on a set of continuous control tasks relevant to the application of Reinforcement Learning for NPC design in video games.
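The min-max problem mentioned above can be sketched in standard CMDP notation (the symbols here are generic, not necessarily those used by the authors): the agent maximizes expected return subject to upper bounds on a set of expected constraint costs, and Lagrangian relaxation turns this into a saddle-point problem over the policy and the multipliers.

```latex
% Constrained MDP: maximize return subject to K behavioral constraints
% J_R(\pi): expected return; J_{C_i}(\pi): expected cost for constraint i;
% d_i: threshold for constraint i (generic notation, assumed for illustration)
\max_{\pi} \; J_R(\pi)
\quad \text{s.t.} \quad J_{C_i}(\pi) \le d_i, \quad i = 1, \dots, K

% Lagrangian relaxation: a min-max game between the policy and the
% multipliers \lambda_i \ge 0, which automatically weigh each constraint
\min_{\lambda \ge 0} \; \max_{\pi} \;
  J_R(\pi) - \sum_{i=1}^{K} \lambda_i \left( J_{C_i}(\pi) - d_i \right)
```

In practice, methods such as SAC-Lagrangian alternate gradient ascent on the policy parameters with gradient descent on each multiplier, so a multiplier grows while its constraint is violated and shrinks toward zero once the constraint is satisfied.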
The number of visually impaired or blind (VIB) people in the world is estimated at several hundred million [4]. Based on a series of interviews with VIB users and developers of assistive technology, this paper provides a survey of machine-learning-based mobile applications and identifies the most relevant ones. We discuss the functionality of these apps, how they align with the needs and requirements of VIB users, and how they can be improved with techniques such as federated learning and model compression. As a result of this study, we identify promising future directions of research in mobile perception, micro-navigation, and content summarization.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context in which an article is cited and indicate whether the citing article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.