As modern society ages, accidents involving the elderly are increasing and caregivers are in short supply.
Care robots are currently being developed, but they are not yet widely used.
One of the reasons for this is that complex systems with numerous sensors are costly and require substantial maintenance.
Another reason is that complex automatic systems are difficult for people to understand, and physical contact with such systems can cause a sense of unease.
For these reasons, the authors have been researching a user state estimation method for care robots that uses a small number of sensors, together with an information presentation method that gives users a sense of ease.
The center of gravity (CoG) is an effective indicator of the state of the human body, but determining it accurately generally requires many sensors.
To address this problem, we developed a method for calculating CoG candidates from a small number of sensors.
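The specific calculation is not given here, but as a rough illustration of the idea of inferring the CoG from few sensors, a common quasi-static approximation estimates the ground projection of the CoG as the force-weighted mean of the contact positions; the sensor count n, the vertical force readings F_i, and the sensor positions p_i below are assumptions for the sketch, not the authors' formulation:
\[
\mathbf{p}_{\mathrm{CoG}} \approx \frac{\sum_{i=1}^{n} F_i \,\mathbf{p}_i}{\sum_{i=1}^{n} F_i}
\]
With only a few force sensors, this yields a set of candidate CoG positions rather than a unique solution, which is consistent with the candidate-based approach described above.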
In our previous studies, the method was validated only through offline state estimation simulations on data measured while the robot was moved without any state estimation.
This was insufficient to validate real-time state estimation on an actual robot and to operate the robot based on the estimated state.
In addition, the care robot developed in our previous studies was equipped with an interface intended to reduce the user's sense of unease; it displayed the estimated user state and the timing of user movements on a screen and described them with audio.
However, the system presented this information only while providing standing support, and it did not communicate with the user while they were walking, sitting, or in an abnormal state.
In this study, we prototyped an interface that presents information for all user movements, including standing, walking, sitting, and abnormal states, and that links each movement to the care robot's response to the estimated user state.
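The following is a minimal sketch, in Python, of how such a state-to-response linkage might be structured; the state names, action labels, and messages are hypothetical placeholders, not the actual implementation used on the robot:

```python
from enum import Enum, auto

class UserState(Enum):
    # Estimated user states handled by the interface (assumed set).
    STANDING = auto()
    WALKING = auto()
    SITTING = auto()
    ABNORMAL = auto()

# Hypothetical mapping from an estimated state to a robot action and the
# message presented on screen and by audio.
RESPONSES = {
    UserState.STANDING: ("raise_armrest", "I will raise the armrest to support you while standing."),
    UserState.WALKING:  ("accompany_user", "I am moving with you. Please walk at your own pace."),
    UserState.SITTING:  ("lower_armrest", "I will lower the armrest so you can sit down."),
    UserState.ABNORMAL: ("stop_and_reset", "An unusual state was detected. Stopping and returning to the initial position."),
}

def respond(estimated_state: UserState) -> None:
    # Look up the response for the estimated state and present it to the user.
    action, message = RESPONSES[estimated_state]
    print(f"[robot] action: {action}")
    print(f"[interface] screen/audio: {message}")

if __name__ == "__main__":
    respond(UserState.ABNORMAL)
```

The point of the sketch is only that every estimated state, including the abnormal one, is paired with both a robot action and an explanatory message, so the user is never acted on without being informed.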
The interface was evaluated through experiments using an actual robot with several participants.
In all cases, the robot estimated the user's state, raised and lowered the armrest to allow the user to stand up and sit down, and stopped its motion and reverted to its original state when it detected an anomaly.
Participant interviews further supported the effectiveness of the interface.
These results confirm that the proposed method can estimate the user's state, provide support based on that state, and present information through the interface in a way that gives users a sense of ease.