Assistance for visually impaired and blind (VIB) people when travelling usually relies on other people. Assistive devices have been developed for blind navigation, but many require users to purchase additional hardware and lack flexibility, making them inconvenient for VIB users. In this research, we used a mobile phone with a depth camera for obstacle avoidance and object recognition. The system includes a mobile application controlled with simple voice and gesture commands to assist in navigation. It gathers depth values from 23 coordinate points, which are analyzed to determine whether an obstacle is present in the head area, torso area, or ground area, or is a full-body obstacle. To provide a reliable warning system, outdoor objects are detected within a distance of 1.6 m. The object detection function also includes a unique interactable feature that lets the user and the device cooperate in finding indoor objects through audio and vibration feedback; users were able to locate their desired objects more than 80% of the time. In conclusion, a flexible and portable obstacle detection system was developed using a depth camera-enabled mobile phone, without the need to purchase additional hardware devices.
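The region-based obstacle check described above could be sketched as follows. This is a minimal illustration, not the authors' implementation: the per-region depth sampling and the tie-breaking rule are assumptions; only the 1.6 m warning distance comes from the abstract.

```python
OBSTACLE_DISTANCE_M = 1.6  # warning threshold reported in the study

def classify_obstacle(depth_points):
    """Classify an obstacle from depth samples.

    depth_points: list of (region, depth_m) pairs, where region is
    'head', 'torso', or 'ground'. Returns the blocked region, the
    string 'full body' if every region is blocked, or None.
    """
    blocked = {region for region, depth in depth_points
               if depth <= OBSTACLE_DISTANCE_M}
    if not blocked:
        return None
    if blocked == {"head", "torso", "ground"}:
        return "full body"
    # Otherwise report the region of the closest blocked sample
    # (a simplifying assumption for this sketch).
    nearest = min(d for r, d in depth_points if r in blocked)
    for region, depth in depth_points:
        if depth == nearest:
            return region
```

For example, `classify_obstacle([("head", 0.9), ("torso", 2.5), ("ground", 3.0)])` reports a head-area obstacle, since only the head sample falls inside the 1.6 m warning distance.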
There are 24.5 million visually impaired and blind (VIB) students who have limited access to educational materials due to cost or availability. Although technological advances are widespread, providing individualized learning through technology remains a challenge without the proper tools or experience. The TacPic system was developed as an online platform to create tactile educational materials (TEM) from the image inputs of users who have no prior experience in tactile photo development or 3D printing. The TacPic system allows users to simply upload images to a website and uses AI cloud computing on the Amazon Web Services platform. First, it segments and labels the images. Then, the text label is converted into braille words. Subsequently, surface rendering and consolidation of the image and text are performed, before conversion into a single file that is ready for 3D printing. Currently, the types of TEM that can be created are tactile flashcards, tactile maps, and tactile peg puzzles, which can be developed within a few hours, in contrast to a development period of weeks using traditional methods. Furthermore, the tactile educational materials were tested by two VIB teachers and six VIB students. It was found that students who are congenitally blind need more time to identify the objects and rely more on the braille labels than students who became blind at a later age. Teachers also suggested producing TEM that use simpler images, and TEM that are suitable for both sighted and VIB students. In conclusion, the researchers successfully developed a platform that allows more educators and parents to develop personalized and individualized TEM. In the future, further optimization of the algorithms to improve segmentation and the inclusion of other features, such as color, could be undertaken. Finally, new printing materials and methods are needed to improve printing efficiency.
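The four-stage pipeline described above (segment and label, convert to braille, render, consolidate) could be outlined as below. Every function body here is a stand-in assumption; the real system runs AI services on AWS and emits a 3D-printable file rather than a dictionary. The braille table is a genuine excerpt of the Unicode Grade 1 braille alphabet.

```python
# A small excerpt of the Grade 1 braille alphabet (Unicode braille patterns).
BRAILLE = {"a": "\u2801", "c": "\u2809", "t": "\u281e"}

def segment_and_label(image):
    """Stage 1: segment the image and label the main object (stubbed)."""
    return image["label"]

def to_braille(label):
    """Stage 2: convert a text label into braille characters."""
    return "".join(BRAILLE.get(ch, "?") for ch in label.lower())

def render_and_consolidate(image, braille):
    """Stages 3-4: render the tactile surface and merge image and label
    into one printable description (stubbed as a dict for this sketch)."""
    return {"surface": image["pixels"], "label": braille}

def build_tem(image):
    """Run all stages on one uploaded image."""
    label = segment_and_label(image)
    return render_and_consolidate(image, to_braille(label))
```

The key design point carried over from the abstract is that the user supplies only an image; labelling, braille conversion, and consolidation into a single printable artifact are automatic.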
Focus on the development of assistive devices that support the safety and mobility of visually impaired and blind (VIB) people has increased, but making such devices portable remains a challenge. We propose a system for localized obstacle avoidance with a haptic-based interface for VIB people, implemented using the Robot Operating System (ROS), to improve the obstacle detection of existing assistive devices. Using a depth camera sensor, an obstacle localization algorithm was developed within the ROS framework to identify key regions and detect head-level, left/right torso-level, and left/right ground-level obstacles. The proposed wearable device provides a discernible array of haptic feedback to convey the perceived locations of obstacles. The system was tested by blindfolded volunteers to determine its accuracy in localizing objects in various environments. Experimental results showed that the system was consistent across different setups. The obstacle detection algorithm was optimized and evaluated to reject noise while concurrently detecting smaller obstacles, making detection more robust. Subsequently, the Eulerian video magnification method was used to determine the level of vibration isolation for a prototype.
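The mapping from the five detection regions to the haptic feedback array could be sketched as follows. The region names come from the abstract; the motor indices, the single-threshold rule, and the 1.5 m value are assumptions for illustration, not details of the authors' ROS implementation.

```python
# Hypothetical assignment of one vibration motor per detection region.
MOTORS = {"head": 0, "torso_left": 1, "torso_right": 2,
          "ground_left": 3, "ground_right": 4}

def haptic_pattern(region_depths, threshold=1.5):
    """Return the sorted motor indices to activate.

    region_depths: dict mapping each region name to the nearest depth
    sample (in metres) observed in that region. A motor fires when its
    region's nearest obstacle is within the threshold.
    """
    return sorted(MOTORS[region] for region, depth in region_depths.items()
                  if depth <= threshold)
```

With this scheme, a close obstacle at the left torso and the right ground would activate two distinct motors at once, letting the wearer perceive both locations concurrently.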