Plenoptic imaging records not only two-dimensional spatial projections but also the directions of light rays, which enables single-shot all-in-focus imaging. However, its limited spatial resolution remains an obstacle to high-quality all-in-focus imaging. Although various super-resolution (SR) methods have been developed and combined with multifocus image fusion (MFIF) to reconstruct high-quality multifocus fused SR images for a range of applications, almost all of them treat MFIF and SR as separate problems. To the best of our knowledge, we are the first to unify MFIF and SR from an optical perspective as the multifocus image SR fusion (MFISRF) problem, and we propose a dataset-free unsupervised framework named deep fusion prior (DFP) to address it, particularly for plenoptic SR all-in-focus imaging. Both numerical and practical experiments demonstrate that DFP approaches or even outperforms state-of-the-art combinations of MFIF and SR methods. We therefore believe that DFP can be applied to a variety of computational photography tasks. The DFP code is open source and available at http://github.com/GuYuanjie/DeepFusionPrior.