A deep-learning algorithm is proposed for the inpainting of Bragg coherent diffraction imaging (BCDI) patterns affected by detector gaps. These regions of missing intensity can compromise the accuracy of reconstruction algorithms, inducing artefacts in the final result. It is thus desirable to restore the intensity in these regions to ensure more reliable reconstructions. The key aspect of the method is that the neural network is trained on cropped sections of diffraction data, and the predictions generated by the model are then patched along the gap to complete the full diffraction peak. This approach gives access to a greater amount of experimental data for training and allows overlapping sections to be averaged during patching. As a result, it produces robust predictions for experimental data arrays of any size. It is shown that the method removes gap-induced artefacts from the reconstructed objects for both simulated and experimental data, which becomes essential for high-resolution BCDI experiments.
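
The crop-and-patch strategy described above can be illustrated with a minimal sketch: overlapping crops are taken along the detector gap, each crop is passed through the trained network, and the overlapping predictions are averaged back into the full array while measured pixels are left untouched. The function `predict_crop`, the crop size and the stride are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def predict_crop(crop: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the trained inpainting network.
    Here it simply returns its input; in practice it would restore
    the intensity inside the masked gap pixels of the crop."""
    return crop

def inpaint_along_gap(intensity, gap_mask, crop_size=64, stride=16):
    """Slide overlapping crops along the detector gap, run the model
    on each crop, and average overlapping predictions in the gap."""
    filled = intensity.astype(float).copy()
    accum = np.zeros_like(filled)
    weight = np.zeros_like(filled)

    rows, cols = np.nonzero(gap_mask)              # pixels inside the gap
    for r0 in range(rows.min(), rows.max() + 1, stride):
        for c0 in range(cols.min(), cols.max() + 1, stride):
            r1, c1 = r0 + crop_size, c0 + crop_size
            if r1 > intensity.shape[0] or c1 > intensity.shape[1]:
                continue                           # crop falls off the detector
            pred = predict_crop(filled[r0:r1, c0:c1])
            accum[r0:r1, c0:c1] += pred            # sum overlapping predictions
            weight[r0:r1, c0:c1] += 1.0

    covered = weight > 0
    averaged = np.divide(accum, weight, out=np.zeros_like(accum), where=covered)
    # keep measured intensities; replace only pixels inside the gap
    filled[gap_mask & covered] = averaged[gap_mask & covered]
    return filled

# Toy usage: a Gaussian peak on a 256x256 detector with a 4-pixel horizontal gap.
y, x = np.mgrid[0:256, 0:256]
peak = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2 * 20.0 ** 2))
gap = np.zeros_like(peak, dtype=bool)
gap[126:130, :] = True
peak_gapped = np.where(gap, 0.0, peak)
restored = inpaint_along_gap(peak_gapped, gap)
```

Averaging the overlapping predictions, rather than tiling disjoint crops, is what makes the patched result smooth across crop boundaries and independent of the size of the experimental array.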