This study aims to uncover how participants in conversation predict the end of an utterance in spontaneous Japanese speech. In spontaneous everyday conversation, participants must predict when a speaker's utterance will end in order to take turns smoothly, without long gaps. We hypothesize that they rely not only on syntactic cues but also on prosodic cues for end-of-utterance prediction, because syntactic completion points are difficult to identify in spontaneous Japanese. In previous studies, we found that prosodic features change significantly in the final accentual phrase; however, it remains unclear which prosodic features support the prediction. In this paper, we focus on the dependency structure among bunsetsu phrases as the syntactic factor and investigate the relation between phrase dependency and prosodic features. The results showed that the average fundamental frequency and the average intensity of accentual phrases did not decline until the modified phrase appeared. Next, to predict the end of an utterance from the syntactic and prosodic features, we constructed a generalized linear mixed model. The model provided higher accuracy than one using the prosodic features alone. These findings suggest that prosodic changes and phrase-dependency relations inform the hearer that an utterance is approaching its end.
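The abstract does not specify the model's predictors or implementation. As a rough illustration only, the following is a minimal sketch of a logistic generalized linear mixed model of the kind described, fitted on synthetic data with a per-speaker random intercept; all feature names (mean_f0, mean_intensity, dep_resolved) are hypothetical placeholders, not the paper's actual variables.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n = 400

# Synthetic per-accentual-phrase features (placeholders for a real corpus).
df = pd.DataFrame({
    "mean_f0": rng.normal(0.0, 1.0, n),          # z-scored average F0
    "mean_intensity": rng.normal(0.0, 1.0, n),   # z-scored average intensity
    "dep_resolved": rng.integers(0, 2, n),       # 1 = no pending bunsetsu dependency
    "speaker": rng.integers(0, 8, n).astype(str),
})

# Toy labels: final phrases tend to have lower F0/intensity and resolved dependencies.
logit = -0.8 * df.mean_f0 - 0.6 * df.mean_intensity + 1.2 * df.dep_resolved - 0.5
df["is_final"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Logistic GLMM: fixed prosodic/syntactic effects, random intercept per speaker.
model = BinomialBayesMixedGLM.from_formula(
    "is_final ~ mean_f0 + mean_intensity + dep_resolved",
    {"speaker": "0 + C(speaker)"},
    df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

In practice one would evaluate such a model by comparing its held-out accuracy against a baseline fitted on the prosodic features alone, which is the comparison the abstract reports.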
It has become remarkably common for people to own multiple mobile devices, yet it is still difficult to use them effectively in combination. In this study, we constructed a new system called VISTouch that provides new operational capabilities and increases user interest in mobile devices by enabling multiple devices to be combined dynamically and spatially. With VISTouch, for example, spatially connecting a smartphone to a horizontally positioned tablet that displays a map viewed from above allows the two devices to dynamically obtain their correct relative position. The smartphone then displays, in real time, images viewed from its own position, direction, and angle, acting as a window into the virtual 3D space. We applied VISTouch to two applications that use detailed information about the relative positions of multiple devices in real space. These applications demonstrated the potential benefits of using multiple devices in combination.
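The abstract does not describe VISTouch's implementation, but the underlying idea of composing the tablet's pose in the virtual scene with the phone's measured pose relative to the tablet, to obtain the virtual camera for the phone's "window", can be sketched as follows. All poses, numbers, and function names here are hypothetical placeholders, not the authors' code.

```python
import numpy as np

def translate(x, y, z):
    """4x4 homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def rotate_x(deg):
    """4x4 homogeneous rotation about the x axis."""
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    m = np.eye(4)
    m[1:3, 1:3] = [[c, -s], [s, c]]
    return m

# Pose of the tablet in the shared virtual scene (map lying flat, seen from above).
tablet_in_world = np.eye(4)

# Hypothetical relative pose measured when the phone touches the tablet's edge:
# 10 cm to the right of the tablet centre, standing upright (rotated 90 deg about x).
phone_in_tablet = translate(0.10, 0.0, 0.0) @ rotate_x(90.0)

# Composing the poses gives the phone's camera pose in the scene; the view
# matrix used to render the phone's window is its inverse (world -> camera).
phone_in_world = tablet_in_world @ phone_in_tablet
view_matrix = np.linalg.inv(phone_in_world)
print(view_matrix.round(3))
```

In a running system, phone_in_tablet would be updated continuously from the devices' sensors and touch events so the rendered view tracks the phone's physical position and angle.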