Developing an artificial retina model that mimics the biological retina is a highly challenging task and an important step toward a visual prosthesis. The receptive field structure of the retinal layer is usually modeled as a 2D difference-of-Gaussians (DOG) filter profile. In the present study, a different approach is taken: a retina model is developed that incorporates a 3D two-stage DOG filter (3D-ADOG) whose bandwidth adapts to the local image statistics. With this model, the adaptive image processing performed by the retina can be reproduced. The contribution of the developed model to image quality is evaluated in simulation studies on test images. The first simulation results, covering only the spike-count-based reconstruction of a test video sequence, were published previously. In this study, the interspike-interval measure is used in the simulations in addition to the spike-count-based reconstruction. The reconstruction results are compared using the mean squared error (MSE), universal quality index (UQI), and histogram similarity ratio (HSR), statistical parameters that characterize the likeness between the reconstructed and original images. To assess the performance of the model over time, the time-dependent changes in the MSE, HSR, and UQI are obtained and compared with those of the standard model. The results show that, compared with the well-known classical DOG filter-based retina model, the 3D-ADOG filter-based retina model preserves the spatial details of the image and produces a larger number of distinct gray-tone levels, both of which are important for the visual perception of an image. Retina implant systems based on this model can therefore provide better visual perception for implant recipients.
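
The abstract does not give the filter equations, but the general idea of a DOG receptive field whose bandwidth adapts to local image statistics can be sketched as follows. This is a minimal illustrative sketch, not the authors' 3D-ADOG formulation: the variance-based adaptation rule, the fine/coarse blending scheme, and all parameter values are assumptions made for demonstration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def dog_filter(image, sigma_center=1.0, sigma_surround=3.0):
    """Classical center-surround DOG: narrow center Gaussian minus wider surround Gaussian."""
    img = image.astype(float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

def adaptive_dog_filter(image, sigma_center=1.0, sigma_surround=3.0, window=15):
    """Adaptive-bandwidth DOG sketch: fine and coarse DOG responses are blended
    per pixel according to the local variance, so detailed regions keep a
    narrow bandwidth and smooth regions get a wide one. This adaptation rule
    is a hypothetical stand-in for the local-statistics dependence described
    in the paper."""
    img = image.astype(float)
    local_mean = uniform_filter(img, window)
    local_var = uniform_filter(img ** 2, window) - local_mean ** 2
    v = local_var / (local_var.max() + 1e-12)           # normalized detail map in [0, 1]
    fine = dog_filter(img, 0.5 * sigma_center, 0.5 * sigma_surround)
    coarse = dog_filter(img, 1.5 * sigma_center, 1.5 * sigma_surround)
    return v * fine + (1.0 - v) * coarse                # high detail -> narrow bandwidth
```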
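The three reconstruction-quality measures named above can likewise be sketched in a few lines. MSE and UQI follow their standard definitions (UQI is computed globally here for brevity; it is usually computed over sliding windows and averaged); the HSR shown is an assumed histogram-overlap definition, since the abstract does not state the paper's exact formula.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between the original x and the reconstruction y."""
    x, y = x.astype(float), y.astype(float)
    return float(np.mean((x - y) ** 2))

def uqi(x, y):
    """Universal quality index (Wang & Bovik), computed over the whole image."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return float(4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + 1e-12))

def hsr(x, y, bins=256):
    """Assumed histogram similarity ratio: overlap of the normalized gray-level
    histograms of the two images (1.0 means identical histograms)."""
    hx, _ = np.histogram(x, bins=bins, range=(0, 255))
    hy, _ = np.histogram(y, bins=bins, range=(0, 255))
    hx = hx / hx.sum()
    hy = hy / hy.sum()
    return float(np.minimum(hx, hy).sum())
```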