In this demonstration, we present a closed-loop feedback system for evaluating and improving the human factors performance of a lighting system based on tunable LED technology. We investigate the ways in which closed-loop feedback can enhance the ability of lighting to respond automatically to changes in the user's ongoing activities. A sensing platform uses multimodal wireless sensors and computer vision to detect an individual's presence and uses computational reasoning to make inferences about his or her activities. A "recognition engine" provides access to the inferred activities, which the LED system uses to make contextually relevant lighting changes according to the various operational states within the space. A human factors experiment makes use of a mobile-phone-based context-aware experience sampling application that responds to changes in the activities, delivering questions to the user to help improve the activity classifier and refine the lighting application. During the demonstration, participants will experience lighting changes automatically applied to a workspace to fulfill the visual requirements of the detected activities and to maximize energy savings.
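To illustrate the kind of activity-to-lighting mapping the recognition engine could drive, the sketch below associates inferred activity labels with lighting states. The activity names and lighting parameters here are hypothetical assumptions for illustration, not the actual states used in the demonstration.

```python
# Illustrative mapping from inferred activities to lighting states.
# Labels and parameter values are assumptions, not the demo's real states.
LIGHTING_STATES = {
    "paper_reading": {"intensity": 0.9, "cct_kelvin": 5000},  # bright, cool light for visual acuity
    "screen_work":   {"intensity": 0.5, "cct_kelvin": 4000},  # reduced output to limit display glare
    "conversation":  {"intensity": 0.6, "cct_kelvin": 3500},  # warmer ambient light
    "absent":        {"intensity": 0.0, "cct_kelvin": 4000},  # lights off to maximize energy savings
}

def lighting_for(activity: str) -> dict:
    """Return the lighting state for an inferred activity,
    falling back to a neutral default for unrecognized labels."""
    return LIGHTING_STATES.get(activity, {"intensity": 0.4, "cct_kelvin": 4000})
```

A table-driven mapping like this keeps the lighting policy separate from the classifier, so states can be retuned without retraining the recognition engine.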
INTRODUCTION

Within the human factors and ergonomics community, there is an increasing emphasis on developing technologies and environments that not only accommodate a majority of users on a population scale, but can also be tailored to the singular demands of individuals engaging in specific activities. A key challenge to the implementation of so-called "context-aware" applications is the lack of access to reliable data characterizing the user's behavioral context, as well as uncertainty about the optimal ways in which context-aware applications might take advantage of this information.

We present a system that addresses these barriers in the implementation of an automated lighting control application. We will demonstrate the use of closed-loop feedback to enhance the performance of a sensor-enabled activity recognition system that drives an automated lighting system in an office-style workspace. As a user engages in typical workplace activities, he or she activates sensors installed on objects in the environment. User activities are demonstrated implicitly as the individual interacts with objects or performs behaviors that are recognized by a computer vision system. Inferences about user behavior are generated by a multi-classifier engine running on a computer server embedded in a piece of office furniture. Explicit feedback about the user's reaction to the system's performance is elicited by a handheld computing application (i.e., a mobile phone "app") designed to prompt the user at moments when the system has detected a change in behavior. Responses to targeted questions are used in a reinforcement-learning paradigm to increase or decrease the likelihood that the system will consider the activation patterns to be representative of a specific behavior in the future. When thresholds of sufficient confidence are reached, the tunable LED...
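A minimal sketch of the reinforcement-style feedback loop described above might look like the following. The update rule (an exponential moving average toward the user's confirmation), the learning rate, and the confidence threshold are all illustrative assumptions; the paper's multi-classifier engine may implement this differently.

```python
# Hedged sketch: nudging confidence that an activation pattern represents
# a specific behavior, based on the user's experience-sampling responses.
# Update rule, rate, and threshold are assumptions for illustration.
class ActivityConfidence:
    def __init__(self, learning_rate: float = 0.1, threshold: float = 0.8):
        self.confidence = {}          # activation pattern -> confidence in [0, 1]
        self.lr = learning_rate
        self.threshold = threshold    # confidence required before driving the lights

    def update(self, pattern: str, user_confirmed: bool) -> float:
        """Move confidence toward 1.0 when the user's response confirms the
        inferred activity, and toward 0.0 when it contradicts it."""
        c = self.confidence.get(pattern, 0.5)     # start uncertain
        target = 1.0 if user_confirmed else 0.0
        c += self.lr * (target - c)               # exponential moving average
        self.confidence[pattern] = c
        return c

    def is_confident(self, pattern: str) -> bool:
        """True once accumulated feedback crosses the action threshold."""
        return self.confidence.get(pattern, 0.5) >= self.threshold
```

Repeated confirmations raise a pattern's confidence past the threshold, at which point the system would treat that activation pattern as representative of the behavior; contradictory responses suppress it.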