Compared with traditional cameras, light field imaging additionally records the direction of incoming light. A light field can therefore be represented as a focal stack of images focused at different depths together with an all-in-focus image. The refocusing information in focal stacks provides supplementary cues for saliency detection. In this paper, we propose a novel multi-generator adversarial network for saliency detection, consisting of cascaded multiple generators and a discriminator. The generators extract saliency features from the all-in-focus image and the focal stack. In addition, we feed the discriminator the predicted saliency map multiplied element-wise by the all-in-focus image. This multiplication preserves the color and texture of the salient objects and reduces the computational cost. We conduct experiments on three public datasets, and our method achieves competitive results compared with existing methods.
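The discriminator input described above, the predicted saliency map multiplied by the all-in-focus image, amounts to an element-wise masking of the image by the map. The sketch below illustrates this operation; the function name and array layout are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def discriminator_input(all_in_focus, saliency_map):
    """Mask the all-in-focus image with the predicted saliency map.

    all_in_focus: (H, W, 3) RGB image with values in [0, 1]
    saliency_map: (H, W) predicted saliency probabilities in [0, 1]

    Returns an (H, W, 3) image in which salient regions keep their
    color and texture while non-salient regions are suppressed,
    so the discriminator only sees the candidate object appearance.
    """
    # Broadcast the single-channel map across the three color channels.
    return all_in_focus * saliency_map[..., None]

# Toy example: a 4x4 white image with a salient 2x2 patch.
img = np.ones((4, 4, 3))
sal = np.zeros((4, 4))
sal[1:3, 1:3] = 1.0
masked = discriminator_input(img, sal)
```

In `masked`, pixels inside the salient patch retain their original values, while all other pixels are zeroed out, which is why the color and texture of salient objects survive the multiplication.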