There are many visually impaired people throughout the world, and at present the only widely available mobility aids are white canes and guide dogs. These aids are insufficient for handling unexpected obstacles in varied environments: accidents occur frequently, and guide dogs in particular are costly, which limits their usefulness for the visually impaired. This paper develops a cognitive visual aid system and evaluates its potential for practical use. The system uses a Raspberry Pi with OpenCV to capture images of braille blocks in real time and recognize their patterns; it also gathers information about the surrounding environment and conveys it to the user. In the first phase, a Go/Stop signal is determined by whether the detected braille block is linear (guiding) or circular (warning). In the second phase, an ultrasonic sensor mounted on the visual aid warns the user of nearby obstacles so that the user can walk safely. We also implemented a text recognition system with a Text-to-Speech (TTS) converter that reads recognized text aloud to the user, and tested the TTS function on street signs printed in various fonts. Finally, we discuss how the proposed auxiliary system could be extended to diverse road environments and improved to better assist visually impaired people.
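To illustrate the first phase, the sketch below shows one possible way to separate linear (guiding) from circular (warning) braille blocks with OpenCV on a Raspberry Pi. It is a minimal illustration, not the authors' implementation: the yellow HSV color range, the circularity threshold, and the minimum contour area are all illustrative assumptions that would need tuning for real lighting and camera conditions.

```python
# Minimal sketch: classify detected braille-block contours as linear (GO)
# or circular (STOP) using simple OpenCV shape features.
# All thresholds below are assumptions, not values from the paper.
import cv2
import numpy as np


def classify_block(contour, circularity_threshold=0.8):
    """Return 'STOP' for round (warning) blocks, 'GO' for elongated (guiding) blocks."""
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    if perimeter == 0:
        return None
    # Circularity is 1.0 for a perfect circle and drops for elongated shapes.
    circularity = 4.0 * np.pi * area / (perimeter ** 2)
    return "STOP" if circularity >= circularity_threshold else "GO"


def detect_blocks(frame_bgr):
    """Find yellow braille-block regions in a camera frame and classify each one."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for yellow tactile paving; tune for actual conditions.
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Ignore small specks; the area cutoff is an arbitrary illustrative value.
    return [classify_block(c) for c in contours if cv2.contourArea(c) > 500]
```

In a deployed system, the resulting Go/Stop decision would be combined with the ultrasonic obstacle warnings and TTS output described above before being conveyed to the user.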