Stahovich joined the Mechanical Engineering Department at UC Riverside in 2003, where he is currently a Professor and Chair. His research interests include pen-based computing, educational technology, design automation, and design rationale management.
One challenge in building collaborative design tools that use speech and sketch input is distinguishing gesture pen strokes from those representing device structure, that is, object strokes. In previous work, we developed a gesture/object classifier that uses features computed from the pen strokes and the speech aligned with them. Experiments indicated that the speech features were the most important for distinguishing gestures, underscoring the critical importance of accurate speech–sketch alignment. Consequently, we have developed a new alignment technique that employs a two-step process: the speech is first explicitly segmented (primarily into clauses), and the segments are then aligned with the pen strokes. Our speech segmentation step is unique in that it uses sketch features to locate segment boundaries in multimodal dialog. In addition, it uses a single classifier to directly combine word-based, prosodic (pause), and sketch-based features. In the second step, segments are initially aligned with strokes based on temporal correlation, and classifiers are then used to detect and correct two common alignment errors. Our two-step technique has proven substantially more accurate at alignment than the existing technique, which lacked explicit segmentation. More importantly, in nearly all cases, our new technique yields greater gesture classification accuracy than the existing technique, and it performs nearly as well as the benchmark of manual speech–sketch alignment.
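The initial alignment step described above can be illustrated with a small sketch. This is a hypothetical implementation, not the authors' code: the `Segment` and `Stroke` types, the timestamps, and the fallback rule for non-overlapping strokes are all assumptions made for illustration. It assigns each pen stroke to the speech segment with which it has the greatest temporal overlap, falling back to the nearest segment in time when a stroke overlaps no segment.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """A speech segment (e.g., a clause) with start/end times in seconds."""
    text: str
    start: float
    end: float

@dataclass
class Stroke:
    """A pen stroke with its drawing interval in seconds."""
    stroke_id: int
    start: float
    end: float

def overlap(a_start, a_end, b_start, b_end):
    """Length of the temporal overlap between two intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def gap(seg, stroke):
    """Smallest time gap between a segment and a stroke (0 if they overlap)."""
    if overlap(seg.start, seg.end, stroke.start, stroke.end) > 0:
        return 0.0
    return min(abs(seg.start - stroke.end), abs(seg.end - stroke.start))

def align(segments, strokes):
    """Map each stroke id to the text of the best temporally correlated segment.

    Primary criterion: maximal temporal overlap.
    Tie-breaker (no overlap): minimal time gap to the segment.
    """
    alignment = {}
    for s in strokes:
        best = max(
            segments,
            key=lambda seg: (overlap(seg.start, seg.end, s.start, s.end),
                             -gap(seg, s)),
        )
        alignment[s.stroke_id] = best.text
    return alignment
```

In the full technique, this greedy temporal assignment would be followed by the classifier-based pass that detects and corrects the two common alignment errors; that correction step is omitted here.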
Research has demonstrated that self-explanation hones students' metacognitive skills and increases their performance. We have found, however, that not all self-explanation is substantive. Our goal is to develop computational techniques capable of determining whether a student's explanation is relevant. This would enable us, for example, to create an interactive tutoring system that prompts students to continue their explanations when necessary. This is a tractable task, as self-explanations typically contain a small number of possible concepts. The language used to express these concepts can vary greatly, but our task is only to identify the presence of the concepts, not to perform general machine interpretation. In this paper, we present early work on the automatic understanding of students' handwritten self-explanations of their solutions to homework problems in an engineering statics course. We employ an open information extraction technique commonly used to identify relations in broadcast news transcripts. In our study, this technique achieved up to 97% accuracy at identifying when the content of a student's self-explanation did not match the concepts used by experts in explaining their own work on the same problem.
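The core idea, matching a student's explanation against a small set of expert concepts rather than fully interpreting the text, can be sketched as follows. This is a toy stand-in, not the paper's method: the concept lexicon, phrase lists, and coverage threshold are invented for illustration, and a real system would obtain relations via open information extraction rather than simple substring matching.

```python
# Hypothetical concept lexicon for a statics problem. Each concept maps to
# surface phrases that may express it; an open IE system would extract
# relation tuples instead of matching fixed phrases.
CONCEPT_LEXICON = {
    "sum_of_forces": ["sum of forces", "forces balance", "net force is zero"],
    "sum_of_moments": ["sum of moments", "moments about", "torque balance"],
    "free_body_diagram": ["free body diagram", "fbd", "isolate the body"],
}

def extract_concepts(text):
    """Return the set of lexicon concepts whose phrases appear in the text."""
    text = text.lower()
    return {concept for concept, phrases in CONCEPT_LEXICON.items()
            if any(phrase in text for phrase in phrases)}

def explanation_matches(student_text, expert_concepts, threshold=0.5):
    """Decide whether a student's explanation covers enough expert concepts.

    Returns (is_substantive, concepts_found). The 0.5 coverage threshold is
    an arbitrary illustrative choice, not a value from the study.
    """
    found = extract_concepts(student_text)
    coverage = len(found & expert_concepts) / max(1, len(expert_concepts))
    return coverage >= threshold, found
```

A tutoring system could use the boolean result to decide when to prompt the student to elaborate, which is the application scenario described above.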
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.