This paper presents methods for collecting and analyzing physiological data during real-world driving tasks to determine a driver's relative stress level. Electrocardiogram, electromyogram, skin conductance, and respiration were recorded continuously while drivers followed a set route through open roads in the greater Boston area. Data from twenty-four drives of at least fifty-minute duration were collected for analysis. In Analysis I, features from five-minute intervals of data were used to distinguish three levels of driver stress with an accuracy of over 97% across multiple drivers and driving days. In Analysis II, continuous physiological features were correlated with a continuous metric of observable stressors, showing that on a real-time basis, metrics of skin conductivity and heart rate were most closely correlated with driver stress level. Such automatically calculated physiological features could be used to help manage non-critical in-vehicle information systems and improve the driving experience.
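As a rough illustration of the Analysis I pipeline, the sketch below extracts simple per-window statistics from five-minute intervals of two of the recorded signals. The sampling rate, the feature set, and the signal names are assumptions made for illustration, not the paper's exact configuration.

```python
import numpy as np

FS = 15.5                  # assumed sampling rate (Hz); the actual rates differ by sensor
WINDOW = int(FS * 5 * 60)  # five-minute analysis interval, in samples

def window_features(skin_conductance, heart_rate):
    """Summarize each five-minute window with simple statistics
    (means and standard deviations) of two physiological signals."""
    n = min(len(skin_conductance), len(heart_rate))
    feats = []
    for start in range(0, n - WINDOW + 1, WINDOW):
        sc = skin_conductance[start:start + WINDOW]
        hr = heart_rate[start:start + WINDOW]
        feats.append([sc.mean(), sc.std(), hr.mean(), hr.std()])
    return np.array(feats)  # one feature row per five-minute interval
```

Feature rows like these could then be fed to any standard classifier to separate low-, medium-, and high-stress segments of a drive.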
Abstract. The ability to recognize emotion is one of the hallmarks of emotional intelligence, an aspect of human intelligence that has been argued to be even more important than mathematical and verbal intelligences. This paper proposes that machine intelligence needs to include emotional intelligence and demonstrates results toward this goal: developing a machine's ability to recognize human affective state given four physiological signals. We describe difficult issues unique to obtaining reliable affective data and collect a large set of data from a subject trying to elicit and experience each of eight emotional states, daily, over multiple weeks. This paper presents and compares multiple algorithms for feature-based recognition of emotional state from this data. We analyze four physiological signals that exhibit problematic day-to-day variations: the features of different emotions on the same day tend to cluster more tightly than do the features of the same emotion on different days. To handle the daily variations, we propose new features and algorithms and compare their performance. We find that the technique of seeding a Fisher Projection with the results of Sequential Floating Forward Search improves the performance of the Fisher Projection and provides the highest recognition rates reported to date for classification of affect from physiology: 81 percent recognition accuracy on eight classes of emotion, including neutral.
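A minimal sketch of the SFFS-seeded Fisher projection idea follows, using scikit-learn's LinearDiscriminantAnalysis in the role of the Fisher projection and a simplified floating search (one conditional backward step per forward step, with an iteration cap). The cross-validated accuracy score, the subset size, and these simplifications are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def sffs(X, y, k):
    """Simplified Sequential Floating Forward Search: greedily add the
    best-scoring feature, then conditionally drop one other feature if
    that improves the cross-validated score."""
    def score(idx):
        return cross_val_score(LinearDiscriminantAnalysis(),
                               X[:, idx], y, cv=5).mean()
    selected = []
    for _ in range(3 * k):            # cap iterations to guarantee termination
        if len(selected) >= k:
            break
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        best = max(remaining, key=lambda j: score(selected + [j]))
        selected.append(best)
        if len(selected) > 2:         # conditional backward ("floating") step
            base = score(selected)
            for j in [s for s in selected if s != best]:
                trial = [s for s in selected if s != j]
                if score(trial) > base:
                    selected = trial
                    break
    return selected

# "Seeding" the Fisher projection: fit LDA only on the SFFS-chosen features.
# idx = sffs(X_train, y_train, k=10)
# projector = LinearDiscriminantAnalysis().fit(X_train[:, idx], y_train)
```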
Abstract. We study activity recognition using 104 hours of annotated data collected from a person living in an instrumented home. The home contained over 900 sensor inputs, including wired reed switches, current and water flow inputs, object and person motion detectors, and RFID tags. Our aim was to compare different sensor modalities on data that approached "real world" conditions, where the subject and annotator were unaffiliated with the authors. We found that 10 infrared motion detectors outperformed the other sensors on many of the activities studied, especially those that were typically performed in the same location. However, several activities, in particular "eating" and "reading", were difficult to detect, and we lacked data to study many fine-grained activities. We characterize a number of issues important for designing activity detection systems that may not have been as evident in prior work, where data was collected under more controlled conditions.
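To make the location effect concrete, here is a toy sketch in which each time window is summarized by per-sensor firing counts and classified with a multinomial naive Bayes model; the sensor layout, the window summary, and the classifier choice are illustrative assumptions rather than the study's method.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# X[i, j] = number of times motion detector j fired during window i
# (detectors 0-2 might sit in the kitchen, bathroom, and hallway)
X = np.array([[12, 0, 1],
              [ 0, 9, 2],
              [11, 1, 0],
              [ 1, 8, 3]])
y = np.array(["preparing food", "bathing", "preparing food", "bathing"])

clf = MultinomialNB().fit(X, y)
print(clf.predict(np.array([[10, 0, 2]])))  # -> ['preparing food']
```

Activities tied to a fixed location separate cleanly in such count vectors, which is consistent with motion detectors doing well on location-bound activities while "eating" and "reading", which can happen anywhere in the home, remain hard to detect.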
Wearable computing moves computation from the desktop to the user. We are forming a community of networked, wearable-computer users to explore, over a long period, the augmented realities that these systems can provide. By adapting its behavior to the user's changing environment, a body-worn computer can assist the user more intelligently, consistently, and continuously than a desktop system. A text-based augmented reality, the Remembrance Agent, is presented to illustrate this approach. Video cameras are used both to warp the visual input (mediated reality) and to sense the user's world for graphical overlay. With a camera, the computer can track the user's finger to act as the system's mouse, perform face recognition, and detect passive objects to overlay 2.5D and 3D graphics onto the real world. Additional apparatus such as audio systems, infrared beacons for sensing location, and biosensors for learning about the wearer's affect are described. Using input from these interface devices and sensors, a long-term goal of this project is to model the user's actions, anticipate his or her needs, and provide seamless interaction between the virtual and physical environments.
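As a sketch of the camera-as-mouse idea, the fragment below estimates a fingertip position in a single frame via skin-color segmentation. The HSV thresholds and the overall approach are assumptions for illustration (the original system's tracker is not specified here), and the code targets OpenCV 4.x.

```python
import cv2
import numpy as np

def fingertip(frame_bgr):
    """Return (x, y) of the topmost point of the largest skin-colored
    region, as a crude fingertip estimate, or None if nothing is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # assumed skin-tone range; would need tuning per camera and lighting
    mask = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 255, 255)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)   # largest skin-colored blob
    x, y = hand[hand[:, :, 1].argmin()][0]      # topmost point of the contour
    return int(x), int(y)
```

In a pointer-style interface, the returned coordinates could be smoothed over frames and mapped to cursor motion on the head-mounted display.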