Biogas production from organic raw materials is a highly complex biotechnological process. The underlying anaerobic fermentation process is difficult to measure due to its multi-stage nature. Still, optimization of biogas production and the development of robust and efficient process management strategies require continually updated information about the process. Hence, the development of a comprehensive sensor system with high temporal resolution is key to further advancement in biogas technology. Here, we demonstrate a gas sensor based on cavity enhanced Raman spectroscopy for biogas monitoring. Online detection of all gas components of a biogas mixture enables comprehensive quantification. In addition, robust calibration routines facilitate the adaptation of the sensor for biogas monitoring. A simulated concentration course of a typical fermentation process with defined gas mixtures consisting of CH4, CO2, N2, O2 and H2 showed reliable results for all relevant biogas components over concentration ranges from ppm levels to 100 vol%. The response time of 5 seconds allows online detection and, as a consequence, yields real-time information about the biogas composition. A laboratory biogas reactor was designed to operate biogas production on a miniaturized scale and analyze it using the Raman gas sensor. The developed sensor enables the observation of methane production throughout the first 24 h of the fermentation process. The obtained results show the suitability of cavity enhanced Raman spectroscopy as a gas sensor to monitor the entire process of biogas production. As this strategy would allow the process to be manipulated and optimized according to its current state, it is of great biotechnological interest.
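The quantification step described above typically reduces to normalizing each species' integrated Raman peak area by a species-specific calibration factor. The abstract does not give the routine itself, so the following is a minimal sketch with hypothetical names (`mole_fractions`, `sensitivities` are illustrative, not from the paper):

```python
def mole_fractions(peak_areas, sensitivities):
    """Convert integrated Raman peak areas into mole fractions.

    peak_areas:    {species: integrated peak area}
    sensitivities: {species: calibration factor, proportional to the
                    species' Raman scattering cross-section}
    """
    # Normalize each peak area by the species-specific sensitivity ...
    normalized = {s: area / sensitivities[s] for s, area in peak_areas.items()}
    # ... then rescale so the fractions sum to one.
    total = sum(normalized.values())
    return {s: v / total for s, v in normalized.items()}
```

With equal sensitivities this reduces to simple area normalization; in practice the calibration factors would come from routines such as those the abstract mentions.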
The topic of motivation is a crucial issue for various human-robot interaction (HRI) scenarios. Interactional aspects of motivation can be studied in human-human interaction (HHI) and form the basis for modeling a robot's interactional conduct. Using an ethnographic approach, we explored the factors relevant to the formation of motivation-relevant processes in an indoor-cycling activity. We propose an interactive, action-based motivation model for HRI that has been implemented in an autonomous robot system and tested during a long-term HRI study. The model is based on micro-analyses of human indoor-cycling courses and resulted in an adaptation of specific dialog patterns for HRI. A qualitative evaluation, accompanied by a quantitative analysis, demonstrated that the transfer of interaction patterns from HHI to HRI was successful, with participants benefiting from the interaction experience (e.g., performance, subjective feeling of being motivated).
Autonomous navigation in unstructured environments such as forest or country roads with dynamic objects remains a challenging task, particularly with respect to perceiving the environment using multiple different sensors. The problem has been addressed both by the computer vision community and by researchers working with laser range finding technology, such as the Velodyne HDL-64. Since cameras and LIDAR sensors complement one another in terms of color and depth perception, fusing both sensors is reasonable in order to provide color images with depth and reflectance information as well as 3D LIDAR point clouds with color information. In this paper we propose a method for sensor synchronization, especially designed for dynamic scenes, a low-level fusion of the data of both sensors, and a solution for the occlusion problem that arises from the different viewpoints of the fused sensors. (IEEE Intelligent Vehicles Symposium)
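The low-level fusion described above hinges on projecting LIDAR points into the camera image so that points can be colored and pixels assigned depth. The paper's exact pipeline is not given here; the following is a minimal pinhole-projection sketch under assumed calibration, where `R`, `t` (LIDAR-to-camera extrinsics) and `K` (camera intrinsics) are placeholders for the sensor pair's calibration:

```python
import numpy as np

def project_points(pts_lidar, R, t, K):
    """Project Nx3 LIDAR points into the image plane of a calibrated
    pinhole camera. Returns pixel coordinates for the points in front
    of the camera, plus a mask indicating which points those are."""
    pts_cam = pts_lidar @ R.T + t        # LIDAR frame -> camera frame
    in_front = pts_cam[:, 2] > 0         # discard points behind the camera
    pts_cam = pts_cam[in_front]
    uv_h = pts_cam @ K.T                 # apply intrinsics (homogeneous)
    uv = uv_h[:, :2] / uv_h[:, 2:3]      # perspective division
    return uv, in_front
```

A point on the optical axis then lands on the principal point, which is a quick sanity check for the calibration matrices; the occlusion handling the paper proposes would filter these projections further.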
Learning and matching a user’s preference is an essential aspect of achieving a productive collaboration in long-term Human–Robot Interaction (HRI). However, there are different techniques for matching the behavior of a robot to a user’s preference. The robot can be adaptable, so that a user can change the robot’s behavior to their needs, or the robot can be adaptive and autonomously try to match its behavior to the user’s preference. Both approaches might decrease the gap between a user’s preference and the actual system behavior. However, the Level of Automation (LoA) of the robot differs between the two: either the user controls the interaction, or the robot is in control. We present a study on the effects of different LoAs of a Socially Assistive Robot (SAR) on a user’s evaluation of the system in an exercising scenario. We implemented an online preference learning system and a user-adaptable system. We conducted a between-subject design study (adaptable robot vs. adaptive robot) with 40 subjects and report our quantitative and qualitative results. The results show that users evaluate the adaptive robot as more competent and warm, and report a higher alliance. Moreover, this increased alliance is significantly mediated by the perceived competence of the system. This result provides empirical evidence for the relation between the LoA of a system, the user’s perceived competence of the system, and the perceived alliance with it. Additionally, we provide evidence for a proof-of-concept that the chosen preference learning method (i.e., Double Thompson Sampling (DTS)) is suitable for online HRI.
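The abstract names Double Thompson Sampling (DTS) but gives no algorithmic detail. As a hedged illustration, here is a simplified sketch of one DTS round for a K-armed dueling bandit: pairwise win counts induce Beta posteriors, one posterior sample picks the first candidate (Copeland-style), and a second sample picks its opponent. The helper names `dts_select` and `update` are my own, and the selection rule is abbreviated relative to the published algorithm:

```python
import random

def dts_select(wins, k):
    """One round of (simplified) Double Thompson Sampling for a
    k-armed dueling bandit. wins[i][j] counts wins of arm i over j."""
    # First sample: draw a full pairwise preference matrix from Beta posteriors.
    theta = [[0.5] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i != j:
                theta[i][j] = random.betavariate(wins[i][j] + 1, wins[j][i] + 1)
    # First candidate: arm winning the most sampled duels (Copeland-style score).
    scores = [sum(1 for j in range(k) if j != i and theta[i][j] > 0.5)
              for i in range(k)]
    first = max(range(k), key=lambda i: scores[i])
    # Second sample: re-draw only the duels against the first candidate.
    theta2 = [random.betavariate(wins[j][first] + 1, wins[first][j] + 1)
              for j in range(k)]
    theta2[first] = -1.0  # exclude a self-duel
    second = max(range(k), key=lambda j: theta2[j])
    return first, second

def update(wins, winner, loser):
    """Record the outcome of one duel (e.g., the user's stated preference)."""
    wins[winner][loser] += 1
```

In an exercising scenario, each "duel" could correspond to the user comparing two robot behaviors; the win counts then steer the robot toward the preferred behavior over repeated sessions.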