This paper discusses a method for supporting blackboard-based lectures. In this method, students watch a video of the blackboard-based lecture on a tablet computer. Portions of the blackboard are recorded by two or more cameras, and a player is designed to enable students to view and listen to any portion of the lecture. The videos in our method must be of high resolution and high quality so that students can identify the characters written on the blackboard. However, when many students receive the video over a wireless LAN, the bandwidth available to each student decreases. We attempt to maintain the image quality of the video by decreasing the frame rate. After viewing the videos, the participating students completed questionnaires to evaluate them.
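The trade-off described above (holding per-frame quality fixed and lowering the frame rate as the shared wireless bandwidth is split among more students) can be illustrated with a minimal sketch. The constant bits-per-pixel model, the function name, and the numbers below are illustrative assumptions, not parameters taken from the paper.

```python
def max_frame_rate(link_capacity_mbps: float,
                   num_students: int,
                   width: int,
                   height: int,
                   bits_per_pixel: float = 0.1) -> float:
    """Estimate the highest frame rate that keeps per-pixel quality fixed
    when a shared wireless link is divided evenly among the students.

    bits_per_pixel stands in for the encoder's quality setting: holding it
    constant while lowering the frame rate trades motion smoothness for
    legibility of the writing on the blackboard.
    """
    share_bps = link_capacity_mbps * 1e6 / num_students   # per-student bandwidth
    bits_per_frame = width * height * bits_per_pixel      # cost of one frame
    return share_bps / bits_per_frame


# Example: a 100 Mbps access point shared by 40 students streaming 1080p video
# supports roughly 12 fps at this assumed quality level.
print(round(max_frame_rate(100, 40, 1920, 1080), 1), "fps")
```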
For upper-level applications, such as those that plan robot actions or those that search the full log of activities in a smart home, action predicate expressions in the form of knowledge graphs may play an important role. The sequence of activities alone, which conventional activity recognition systems can supply, may not be sufficient for such applications. The subject of a particular activity is crucial information in most cases, and the object of the activity is often necessary to identify its characteristics. From this perspective, we have investigated the activities recognized by activity recognition systems, trying to identify the hidden elements that play the roles of the subject and the object of each activity, i.e., the activity knowledge graph. These hidden elements fall into two categories: (1) person (subject)-person (object) interactions, and (2) person (subject)-object (object) interactions. Depending on the class of activity, identifying these elements can be very difficult: the hidden elements for walk, pick up, open, and drink are quite easy to determine, but those for look at, see, watch, and throw are difficult. The difficulty arises from the fact that the object (object) is not in contact with the person (subject). In this paper we develop a method that identifies the non-contacted object from the direction of the eye gaze of the person (subject) for the watch activity category. Using the "Watching TV" data from the Stair lab, the proposed system achieved 85% accuracy.
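One simple way to realize gaze-based identification of a non-contacted object is to test whether a candidate object lies within an angular cone around the subject's gaze direction. The sketch below is an assumed geometric formulation, not the paper's actual method; the function name, the 15-degree threshold, and the example coordinates are hypothetical.

```python
import numpy as np

def object_in_gaze(gaze_origin, gaze_dir, obj_center, max_angle_deg=15.0):
    """Return True if obj_center lies within a cone of max_angle_deg around
    the subject's gaze direction, making it a plausible object of a
    'watch' activity (e.g., person (subject) watching TV (object))."""
    to_obj = np.asarray(obj_center, float) - np.asarray(gaze_origin, float)
    to_obj /= np.linalg.norm(to_obj)
    gaze = np.asarray(gaze_dir, float) / np.linalg.norm(gaze_dir)
    cos_angle = float(np.clip(np.dot(gaze, to_obj), -1.0, 1.0))
    return np.degrees(np.arccos(cos_angle)) <= max_angle_deg


# Example: a person seated at the origin looking along the x-axis, with a TV
# about two meters away and slightly off-axis; the TV falls inside the cone.
print(object_in_gaze(gaze_origin=(0.0, 1.2, 0.0),
                     gaze_dir=(1.0, 0.0, 0.0),
                     obj_center=(2.0, 1.3, 0.1)))
```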