Individuals with visual impairments frequently face substantial difficulties in interacting with their environment, a problem often exacerbated by the high cost and limited availability of existing assistive technologies. This study introduces a prototype for a cost-effective and accessible assistive device that employs deep learning techniques for object recognition. The proposed system uses the YOLO-V7 model, a deep learning algorithm trained on a comprehensive dataset of everyday objects, including US dollar denominations. In conjunction with two transfer learning-based cascade models, the system detects 86 object categories. Once an object is identified, its name is converted into a Braille-readable format using the Python Braille library. Comprehensive experiments and analyses were undertaken to assess the efficacy of the proposed system. With a processing and Braille code generation time of 188.5 ms per frame, the model achieved recall, precision, and mAP scores of 0.81, 0.92, and 0.96, respectively, confirming its potential to aid visually impaired individuals in recognizing and interacting with objects in their environment. The integration of deep learning with high-performance platform boards has enabled a promising solution to the challenges visually impaired individuals face in environmental interaction. Overall, the proposed prototype represents an accessible and cost-effective assistive device with the potential to change how visually impaired individuals interact with their surroundings.
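
The label-to-Braille step described above can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation (which relies on the Python Braille library); it assumes uncontracted Grade 1 Braille and encodes each cell at Unicode codepoint U+2800 plus a bitmask in which dot n sets bit (n-1). The `to_braille` helper and the example label are hypothetical.

```python
# Illustrative sketch: convert a detected object label to Grade 1 Braille
# Unicode cells. Each cell is chr(0x2800 + mask), where raised dot n
# contributes bit (n - 1) to the mask.

# Dot patterns for the 26 letters in uncontracted (Grade 1) Braille.
LETTER_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5), "i": (2, 4),
    "j": (2, 4, 5), "k": (1, 3), "l": (1, 2, 3), "m": (1, 3, 4),
    "n": (1, 3, 4, 5), "o": (1, 3, 5), "p": (1, 2, 3, 4),
    "q": (1, 2, 3, 4, 5), "r": (1, 2, 3, 5), "s": (2, 3, 4),
    "t": (2, 3, 4, 5), "u": (1, 3, 6), "v": (1, 2, 3, 6),
    "w": (2, 4, 5, 6), "x": (1, 3, 4, 6), "y": (1, 3, 4, 5, 6),
    "z": (1, 3, 5, 6),
}

def to_braille(label: str) -> str:
    """Map a detection label to a string of Braille Unicode cells."""
    cells = []
    for ch in label.lower():
        if ch in LETTER_DOTS:
            mask = sum(1 << (dot - 1) for dot in LETTER_DOTS[ch])
            cells.append(chr(0x2800 + mask))
        elif ch == " ":
            cells.append(chr(0x2800))  # blank cell separates words
    return "".join(cells)

if __name__ == "__main__":
    # A hypothetical label such as the detector might emit.
    print(to_braille("water bottle"))
```

Computing each codepoint from its dot pattern, rather than hand-maintaining a table of Braille glyphs, keeps the mapping auditable against the standard Braille dot numbering.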