Direct volume rendering is a widely used technique for extracting information from three-dimensional scalar fields acquired by measurement or numerical simulation. However, the translucency that direct volume rendering uses to expose the internal structure of a volume often makes it difficult to perceive the depth of complex structures. In this paper, we propose a new method for applying depth-of-field effects to volume ray-casting to improve depth perception. A thin-lens camera model is used to simulate rays passing through different parts of the lens. The proposed method is implemented in the GPU pipeline with no preprocessing, so existing acceleration techniques for volume ray-casting can be applied without restriction. We also propose a multi-pass rendering framework based on progressive lens sampling. This technique uses a different number of lens samples per pixel, depending on the size of the circle of confusion at the point where each ray intersects the volume data. In experiments with various datasets, we demonstrate that higher-quality images with better depth perception are generated up to 9x faster than with the existing depth-of-field method for direct volume rendering.

INDEX TERMS Computer graphics, computers and information processing, image generation, imaging, depth-of-field effect, direct volume rendering, ray casting, visualization.
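
As a rough illustration of the thin-lens model mentioned in the abstract (not the authors' GPU implementation), the following C++ sketch shows the two standard ingredients such a method relies on: generating a ray whose origin is jittered over the lens aperture but aimed at the focal plane, and estimating the circle-of-confusion diameter at a given depth. All identifiers (lensRadius, focalDistance, focalLength, apertureDiameter) are assumptions made for this sketch.

```cpp
// Illustrative thin-lens ray generation and circle-of-confusion estimate.
// Parameter names are assumptions for this sketch, not identifiers from the paper.
#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 normalize(Vec3 a) {
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return scale(a, 1.0f / len);
}

// Uniformly sample a point on the circular lens aperture (polar mapping).
static void sampleLens(float lensRadius, float u1, float u2, float& dx, float& dy) {
    float r     = lensRadius * std::sqrt(u1);
    float theta = 2.0f * 3.14159265f * u2;
    dx = r * std::cos(theta);
    dy = r * std::sin(theta);
}

// Given a pinhole ray through a pixel, build a thin-lens ray: every lens sample
// is aimed at the same point on the focal plane, so geometry at the focal
// distance stays sharp while everything else is blurred.
Ray thinLensRay(const Ray& pinhole, const Vec3& camRight, const Vec3& camUp,
                float lensRadius, float focalDistance, float u1, float u2) {
    Vec3 focusPoint = add(pinhole.origin, scale(pinhole.dir, focalDistance));
    float dx, dy;
    sampleLens(lensRadius, u1, u2, dx, dy);
    Vec3 origin = add(pinhole.origin, add(scale(camRight, dx), scale(camUp, dy)));
    return { origin, normalize(sub(focusPoint, origin)) };
}

// Thin-lens circle-of-confusion diameter for a point at depth d along the view
// axis. A larger diameter means more lens samples are needed for a converged blur.
float cocDiameter(float d, float focalDistance, float focalLength, float apertureDiameter) {
    return apertureDiameter * std::fabs(d - focalDistance) / d
         * focalLength / (focalDistance - focalLength);
}
```

In a progressive scheme of the kind described above, a quantity like cocDiameter evaluated where the ray first intersects the volume would drive how many lens samples are accumulated for that pixel across rendering passes.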