This review highlights the biomechanical foundations of braille and tactile graphic discrimination within the context of design innovations in information access for the blind and low-vision community. Braille discrimination is a complex and poorly understood process that requires the coordination of motor control, mechanotransduction, and cognitive-linguistic processing. Despite substantial technological advances and numerous design attempts over the last fifty years, a low-cost, high-fidelity refreshable braille and tactile graphics display has yet to be delivered. Consequently, the blind and low-vision community is left with limited options for information access. This limitation is amplified by the rapid adoption of graphical user interfaces for human-computer interaction, a shift from which the blind and low-vision community was effectively excluded. Text-to-speech screen readers cannot convey the nuances necessary for science, technology, engineering, arts, and math education and offer limited privacy for the user. Printed braille and tactile graphics are effective modalities but are time- and resource-intensive, difficult to access, and lack real-time rendering. Single- and multi-line refreshable braille devices either lack functionality or are prohibitively expensive. Early computational models of mechanotransduction through complex digital skin tissue and of the kinematics of the braille-reading finger are explored as insight into device design specifications. A use-centered, convergence approach for future designs is discussed, in which the design space is defined by both end-user requirements and the available technology.