Today, watching 360-degree videos has become a way to enjoy immersive experiences. However, an important challenge is how to guide the viewer's attention to the video's main scene without breaking immersion or the narrative thread. To address this challenge, we developed a software prototype to assess three approaches: Arrows, Radar, and Auto Focus. These are based on visual guidance cues used in first-person shooter games, such as Radar-Sonar, Radar-Compass, and Arrows. In the study, a questionnaire was used to evaluate comprehension of the narrative, the users' perspective on the design of the visual cues, and the usability of the system. In addition, data on the movement of the user's head was collected in order to analyze the focus of attention. The analysis was performed with statistical methods; the results show that participants who used any of the visual cues improved significantly over the control group (no visual cues) in finding the main scene. With respect to narrative comprehension, significant improvements were obtained for the groups that used Radar and Auto Focus compared to the control group.
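As an illustration of how head-movement data could be used to quantify focus of attention, the minimal sketch below measures how often the head orientation points within an assumed field of view of the main scene; the logging format, sampling, and threshold are assumptions, not details taken from the study.

    import math

    def to_unit_vector(yaw_deg, pitch_deg):
        """Convert a head orientation (yaw, pitch in degrees) to a 3D unit vector."""
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        return (math.cos(pitch) * math.cos(yaw),
                math.cos(pitch) * math.sin(yaw),
                math.sin(pitch))

    def angular_error(head, target):
        """Angle (degrees) between the head direction and the main-scene direction."""
        dot = sum(h * t for h, t in zip(to_unit_vector(*head), to_unit_vector(*target)))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

    def fraction_on_target(head_samples, target, fov_deg=30.0):
        """Share of head samples whose angular error falls inside the assumed field of view."""
        hits = sum(1 for s in head_samples if angular_error(s, target) <= fov_deg)
        return hits / len(head_samples)

    # Hypothetical head-orientation log (yaw, pitch) and main-scene direction.
    samples = [(10, 0), (25, 5), (80, -10), (15, 2)]
    print(fraction_on_target(samples, target=(20, 0)))  # 0.75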
In many countries, the number of elderly people has grown due to the increase in life expectancy. Many of them currently live alone and are prone to accidents that they cannot report, especially if they are immobilized. For this reason, we have developed a non-intrusive IoT device that, through multiple integrated sensors, collects information on the user's habitual behavior patterns and uses it to generate rules for detecting unusual behavior. These rules are used by our SecurHome system to send alert messages to the dependent person's family members or caregivers if the person's behavior changes abruptly over the course of daily life. This paper describes in detail the design and development of the SecurHome system.
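As an illustration of the kind of rule such a system could apply, the sketch below flags abnormal inactivity from sensor timestamps and triggers an alert; the rule, the threshold, and the notify_caregiver helper are hypothetical and only meant to convey the idea, not SecurHome's actual implementation.

    from datetime import datetime, timedelta

    def notify_caregiver(message):
        # Placeholder for the real alerting channel (SMS, push notification, etc.).
        print("ALERT:", message)

    def check_inactivity(last_motion_event, usual_max_gap_minutes, now=None):
        """Raise an alert when the time since the last motion event exceeds
        the maximum gap learned from the user's habitual behavior."""
        now = now or datetime.now()
        gap = now - last_motion_event
        if gap > timedelta(minutes=usual_max_gap_minutes):
            notify_caregiver(f"No activity detected for {gap}; usual maximum is "
                             f"{usual_max_gap_minutes} minutes.")
            return True
        return False

    # Hypothetical data: last motion seen 3 hours ago, usual maximum gap is 90 minutes.
    check_inactivity(datetime.now() - timedelta(hours=3), usual_max_gap_minutes=90)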
People living with deafness or hearing impairment have limited access to information broadcast live on television. Live closed captioning is an active area of study; to our knowledge, no system developed thus far produces high-quality captions without using scripts or human interaction. This paper presents a comparative analysis of the quality of captions generated for four Spanish news programs by two captioning systems: a semiautomatic system based on respeaking (the system currently used by a Spanish TV station) and an automatic system without human interaction proposed and developed by the authors. The analysis is conducted by measuring and comparing the accuracy, latency, and speed of the captions generated by both systems. The captions generated by the proposed system showed higher quality, considering accuracy in terms of Word Error Rate (WER between 3.76% and 7.29%) and caption latency (approximately 4 s), at an acceptable speed for accessing the information. We contribute a first study focused on the development and analysis of an automatic captioning system without human intervention with promising quality results. These results reinforce the importance of continuing to study such automatic systems.
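For reference, Word Error Rate compares a caption against a reference transcript as (substitutions + deletions + insertions) divided by the number of reference words. A minimal sketch of that computation follows; the example sentences are invented, not taken from the evaluated programs.

    def word_error_rate(reference, hypothesis):
        """WER = (substitutions + deletions + insertions) / reference word count,
        computed with word-level edit distance (dynamic programming)."""
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,          # deletion
                              d[i][j - 1] + 1,          # insertion
                              d[i - 1][j - 1] + cost)   # substitution or match
        return d[len(ref)][len(hyp)] / len(ref)

    # Invented example: one substitution in a six-word reference -> WER ~ 0.167.
    print(word_error_rate("las noticias comienzan a las nueve",
                          "las noticias comienzan a las diez"))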