We designed a robot system that assisted in behavioral intervention programs for children with autism spectrum disorder (ASD). The eight-session intervention program was based on the discrete trial teaching protocol and focused on two basic social skills: eye contact and facial emotion recognition. The robotic interactions occurred in four modules: training element query, recognition of human activity, coping-mode selection, and follow-up action. Children with ASD who were between 4 and 7 years old and who had a verbal IQ ≥ 60 were recruited and randomly assigned to the treatment group (TG, n = 8; 5.75 ± 0.89 years) or the control group (CG, n = 7; 6.32 ± 1.23 years). The therapeutic robot facilitated the treatment intervention in the TG, and a human assistant facilitated the treatment intervention in the CG. The intervention procedures were identical in both groups. The primary outcome measures included parent-completed questionnaires, the Autism Diagnostic Observation Schedule (ADOS), and the frequency of eye contact, which was measured with the partial interval recording method. After completing treatment, the eye contact percentages were significantly increased in both groups. For facial emotion recognition, the percentages of correct answers increased in similar patterns in both groups compared to baseline (P > 0.05), with no difference between the TG and CG (P > 0.05). Problems with the subjects' play skills and their general behavioral and emotional symptoms were significantly diminished after treatment (P < 0.05). These results showed that the robot-facilitated and human-facilitated behavioral interventions had similar positive effects on eye contact and facial emotion recognition, which suggests that robots are useful mediators of social skills training for children with ASD. Autism Res 2017, 10: 1306-1323. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
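The partial interval recording method mentioned above scores an observation session in fixed intervals, marking an interval as positive if the target behavior (here, eye contact) occurs at any point within it. A minimal sketch of that computation follows; the interval length and event times are hypothetical illustrations, not values from the study.

```python
# Sketch of the partial interval recording method: an interval counts as
# positive if at least one eye-contact event starts anywhere inside it.
# Interval length and event times below are hypothetical examples.

def partial_interval_percentage(event_times, session_length, interval_length):
    """Return the percentage of intervals containing at least one event.

    event_times: onset times (seconds) of observed eye-contact events
    session_length: total observation time in seconds
    interval_length: length of each scoring interval in seconds
    """
    n_intervals = int(session_length // interval_length)
    scored = [False] * n_intervals
    for t in event_times:
        idx = int(t // interval_length)
        if 0 <= idx < n_intervals:
            scored[idx] = True  # any occurrence scores the whole interval
    return 100.0 * sum(scored) / n_intervals

# Example: a 60-second session scored in 10-second intervals; events at
# 3.0 s and 7.5 s fall in the same interval, so 2 of 6 intervals score.
pct = partial_interval_percentage([3.0, 7.5, 41.0], 60, 10)
print(round(pct, 1))  # → 33.3
```

Because any single occurrence scores the whole interval, partial interval recording tends to overestimate the true duration of a behavior, which is acceptable when the goal is to track relative change across sessions, as in this study.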
Background Caregivers of people with dementia find it extremely difficult to choose the best care method because of complex environments and the variable symptoms of dementia. To alleviate this care burden, interventions have been proposed that use computer- or web-based applications. For example, an automatic diagnosis of the condition can improve the well-being of both the person with dementia and the caregiver. Other interventions support the individual with dementia in living independently. Objective The aim of this study was to develop an ontology-based care knowledge management system for people with dementia that will provide caregivers with a care guide suited to the environment and to the individual patient’s symptoms. This should also enable knowledge sharing among caregivers. Methods To build the care knowledge model, we reviewed existing ontologies that contain concepts and knowledge descriptions relating to the care of those with dementia, and we considered dementia care manuals. The basic concepts of the care ontology were confirmed by experts in Korea. To infer the different care methods required for the individual dementia patient, the reasoning rules as defined in Semantic Web Rule Languages and Prolog were utilized. The accuracy of the care knowledge in the ontological model and the usability of the proposed system were evaluated by using the Pellet reasoner and OntOlogy Pitfall Scanner!, and a survey and interviews were conducted with caregivers working in care centers in Korea. Results The care knowledge model contains six top-level concepts: care knowledge, task, assessment, person, environment, and medical knowledge. Based on this ontological model of dementia care, caregivers at a dementia care facility in Korea were able to access the care knowledge easily through a graphical user interface. 
The evaluation by the care experts showed that the system contained accurate care knowledge and a level of assessment comparable to standard assessment tools. Conclusions In this study, we developed a care knowledge system that can provide caregivers with care guides suited to individuals with dementia. We anticipate that the system could reduce the workload of caregivers.
This paper describes a new method for indoor environment mapping and localization using a stereo camera. For environmental modeling, we directly use the depth and color information of image pixels as visual features. Furthermore, only the depth and color information along the horizontal centerline of the image, through which the optical axis passes, is used. The usefulness of this approach is that a matching measure between the model and the sensing data can be computed easily using only the horizontal centerline; this matters because the vertical working volume between the model and the sensing data changes with robot motion. As a result, we can build a compact and efficient map representation of the indoor environment. Based on such map nodes and sensing data, we also propose a method for estimating the mobile robot's position with a random-sampling stochastic algorithm. Basic real-world experiments show that the proposed method can serve as an effective visual navigation algorithm. Index Terms—Vision-based navigation, stereo vision, direct method, map building, localization
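The "random-sampling stochastic algorithm" for position estimation is in the family of Monte Carlo localization (particle filtering). The sketch below shows one predict-weight-resample cycle in one dimension; the paper's actual measurement model compares centerline depth/color features against the map, for which a simple position sensor stands in here, and all numeric values are illustrative assumptions.

```python
# Hedged 1-D sketch of Monte Carlo localization (a particle filter).
# The real system would weight particles by comparing centerline
# depth/color features to the map; here a direct position measurement
# stands in. All parameters below are illustrative assumptions.
import math
import random

def particle_filter_step(particles, motion, measurement, noise=0.2):
    """One predict-weight-resample cycle of Monte Carlo localization."""
    # 1. Predict: propagate each particle through a noisy motion model.
    moved = [p + motion + random.gauss(0.0, noise) for p in particles]
    # 2. Weight: Gaussian likelihood of the measurement at each particle.
    weights = [math.exp(-((measurement - p) ** 2) / (2 * noise ** 2))
               for p in moved]
    # 3. Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
# Start with no knowledge: particles spread uniformly over a 10 m corridor.
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
# A stationary robot repeatedly observes its true position, 5.0 m.
for _ in range(20):
    particles = particle_filter_step(particles, motion=0.0, measurement=5.0)
estimate = sum(particles) / len(particles)
# The particle cloud collapses around the true position.
```

Using only centerline features keeps both the map and each particle's expected measurement small, which is what makes evaluating the likelihood for many random samples cheap enough for online localization.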