Since culturally salient stimuli for emotion recognition are scarce in India, we developed and validated a set of 140 coloured pictures depicting six basic emotions and a neutral expression. The expressions were posed by four expressers (two male, two female) with a mean age of 25.25 (SD 3.77) years, and were captured from five different angles against a uniform background. The pictures were shown to 350 undergraduates, who labelled each emotion and rated its intensity. The mean biased hit rate was 93.02 (SD 7.33) and the mean unbiased hit rate was .519 (SD .015). A within-subjects ANOVA revealed a significant main effect of emotion (F(1, 6) = 7.598, p < .001). A t-test (t = 23.116, p < .001) showed that participants identified the intended emotion at above-chance levels. The mean intensity rating was 5.94 (SD .77). Overall, the results indicate that the pictures constitute a valid set of affective stimuli.
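The abstract reports both a biased and an unbiased hit rate but does not define them; in validation studies of this kind the unbiased hit rate is usually computed Wagner-style from a stimulus-by-response confusion matrix. The sketch below illustrates that standard definition with toy counts (the numbers are illustrative, not from the study):

```python
# Sketch: biased vs. unbiased hit rates from a stimulus-by-response
# confusion matrix, assuming the standard (Wagner-style) definitions.
# The data below are invented for illustration.

def biased_hit_rate(confusion, i):
    """Proportion of presentations of stimulus i that were labelled correctly."""
    row_total = sum(confusion[i])
    return confusion[i][i] / row_total

def unbiased_hit_rate(confusion, i):
    """Hits squared, divided by (stimulus frequency x response frequency),
    which discounts indiscriminate overuse of a response category."""
    hits = confusion[i][i]
    row_total = sum(confusion[i])                 # times stimulus i was shown
    col_total = sum(row[i] for row in confusion)  # times response i was used
    return hits ** 2 / (row_total * col_total)

# Toy 2-emotion example: rows = stimulus shown, columns = response given.
confusion = [
    [18, 2],   # "happy" shown 20 times, 18 labelled happy
    [6, 14],   # "sad" shown 20 times, 14 labelled sad
]
print(biased_hit_rate(confusion, 0))    # 18 / 20 = 0.9
print(unbiased_hit_rate(confusion, 0))  # 18^2 / (20 * 24) = 0.675
```

The unbiased measure penalises a rater who labels everything "happy": such a rater would score 1.0 on the biased rate for happy stimuli but much lower on the unbiased rate, because the response-frequency term in the denominator grows.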
Up to 15% of school-going children in India suffer from dyslexia. This paper aims to determine the extent to which existing knowledge about eye-tracking-based human-computer interfaces can be used to assist these children in their reading and writing activities. A virtual keyboard system with multimodal feedback is proposed and designed for a lexically and structurally complex language, using several portable, non-invasive, and low-cost input devices: a touch screen, an eye-tracker, and a soft-switch. Performance was evaluated in terms of text-entry rate, information transfer rate, and type of errors under three experimental conditions: 1) touch screen with auditory feedback; 2) eye-tracking with auditory and visual feedback; and 3) eye-tracking and soft-switch with auditory and visual feedback. The proposed multimodal feedback yielded a significant improvement in text-entry rate with fewer errors. This work presents the first virtual keyboard with multimodal feedback for dyslexic children in the Hindi language, and the approach can be extended to other languages.
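The abstract evaluates the keyboard by information transfer rate without giving a formula; a common definition for selection-based interfaces of this kind is the Wolpaw-style bits-per-selection measure. The sketch below shows that computation under assumed, hypothetical parameters (target count, accuracy, and selection speed are invented, not taken from the study):

```python
# Sketch: Wolpaw-style information transfer rate (ITR) for a
# selection-based interface. All numbers below are hypothetical.
import math

def bits_per_selection(n_targets, accuracy):
    """Bits conveyed per selection, given n equally likely targets and
    the probability of a correct selection (errors spread uniformly)."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """ITR in bits/minute: bits per selection times selection rate."""
    return bits_per_selection(n_targets, accuracy) * selections_per_min

# Hypothetical setup: a 36-key layout, 90% selection accuracy,
# 10 selections per minute.
print(round(itr_bits_per_min(36, 0.90, 10), 2))  # ≈ 41.88 bits/min
```

Under this measure, a feedback scheme that raises accuracy improves ITR twice over: each selection carries more bits, and fewer selections are wasted on corrections.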