Retinal prostheses aim to restore visual perception in patients blinded by photoreceptor degeneration by electrically stimulating surviving retinal ganglion cells (RGCs), causing them to transmit artificial visual signals to the brain. Present-day devices produce limited vision, in part because they indiscriminately and simultaneously activate many RGCs of different types that normally signal asynchronously. To improve artificial vision, we propose a closed-loop, cellular-resolution device that automatically identifies the types and properties of nearby RGCs, calibrates its stimulation to produce a dictionary of achievable RGC activity patterns, and then uses this dictionary to optimize stimulation patterns based on the incoming visual image. To test this concept, we use a high-density multi-electrode array as a lab prototype and deliver a rapid sequence of electrical stimuli from the dictionary, progressively assembling a visual image within the visual integration time of the brain. Greedily minimizing the error between the visual stimulus and a linear reconstruction (as a surrogate for perception) yields a real-time algorithm with an efficiency of 96% relative to the optimum. This framework also provides insights for developing efficient hardware. For example, using only the most effective 50% of electrodes minimally affects performance, suggesting that an adaptive device configured using measured properties of the patient's retina may combine efficiency with accuracy.
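To make the greedy selection concrete, the sketch below illustrates one plausible form of the algorithm described above: at each step, choose the dictionary element whose expected RGC response most reduces the squared error between the target image and a fixed linear reconstruction of the accumulated responses. This is an expository simplification, not the authors' implementation; the function name, the noiseless deterministic responses, and the matrix shapes are all assumptions.

```python
import numpy as np

def greedy_stimulation(S, A, R, n_steps):
    """Greedy dictionary-based stimulation (illustrative sketch).

    S : (n_pixels,)          target visual stimulus
    A : (n_pixels, n_cells)  linear reconstruction filter (RGC responses -> image)
    R : (n_dict, n_cells)    expected RGC response evoked by each dictionary element

    At each step, pick the dictionary element that most reduces the
    reconstruction error; stop when no element helps. Real dictionary
    elements evoke stochastic responses, which this sketch ignores.
    """
    total_response = np.zeros(A.shape[1])
    chosen = []
    for _ in range(n_steps):
        # Reconstruction error if each candidate element were delivered next.
        candidates = A @ (total_response[None, :] + R).T   # (n_pixels, n_dict)
        errors = np.sum((candidates - S[:, None]) ** 2, axis=0)
        best = int(np.argmin(errors))
        # Stop early if no candidate improves on the current reconstruction.
        current_error = np.sum((A @ total_response - S) ** 2)
        if errors[best] >= current_error:
            break
        chosen.append(best)
        total_response += R[best]
    return chosen, total_response

# Toy usage with random data (shapes and values are placeholders):
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 20))    # reconstruction filter for a 64-pixel image
R = rng.random(size=(100, 20))   # dictionary of 100 expected response patterns
S = rng.normal(size=64)          # target image
chosen, resp = greedy_stimulation(S, A, R, n_steps=50)
```

Because each iteration only scores candidate additions against a fixed linear filter, the per-step cost is a single matrix product over the dictionary, which is what makes a greedy scheme of this kind compatible with real-time operation within the brain's visual integration time.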