Although many assistive devices have been developed to aid in daily living, the most common assistive aids for individuals with visual impairments remain the walking cane and the guide dog. These aids effectively help the user navigate an environment; however, the navigable space is limited to the user's immediate surroundings. In this paper, we therefore discuss a method to extend accessibility to remote environments through robotic embodiment, enabling teleoperation and teleperception via multi-modal feedback. To transform remote spatial information into a nonvisual modality, we present a framework that combines an RGB-D camera, a mobile robot, and a haptic interface for 3D haptic rendering, with the goal of enabling haptic exploration of a remote environment. Experiments with three different control methods for robot interaction are conducted with users with and without visual impairments. Several hypotheses are formulated to study the correlation between control/feedback modality and performance in telerobotic operation. Results show that users performed best when combining semi-autonomous navigation with 3D haptic exploration, and they rated their experience with our system as fairly good.
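
To make the depth-to-haptics pipeline concrete, the following is a minimal sketch (not the paper's implementation) of how remote spatial information from an RGB-D camera could be turned into force feedback: a depth image is back-projected into a point cloud via the pinhole camera model, and a simple penalty-based scheme pushes the haptic probe away from the nearest surface point. The intrinsics (fx, fy, cx, cy), the contact radius, and the stiffness k are illustrative assumptions, not values from the paper.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into a 3D point cloud
    using the pinhole camera model. fx, fy, cx, cy are camera
    intrinsics, assumed known from RGB-D calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def haptic_force(probe, cloud, radius=0.01, k=300.0):
    """Penalty-based haptic rendering (illustrative): if the probe
    point comes within `radius` of the nearest surface point, apply
    a Hooke's-law restoring force with stiffness k (N/m)."""
    d = np.linalg.norm(cloud - probe, axis=1)
    i = np.argmin(d)
    if d[i] < radius:
        n = (probe - cloud[i]) / (d[i] + 1e-9)  # outward direction
        return k * (radius - d[i]) * n          # penetration penalty
    return np.zeros(3)
```

In practice, the force loop would run at haptic rates (~1 kHz) against a spatially indexed cloud (e.g., a k-d tree) rather than a brute-force nearest-neighbor search as above; the sketch only illustrates the data flow from depth pixels to rendered forces.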