Inverse problems arise in many domains such as phase imaging, image processing, and computer vision. These problems are often solved with application-specific algorithms, even though their underlying nature is the same: mapping input image(s) to output image(s). Deep convolutional neural networks have shown great potential for highly variable tasks across many image-based domains, but they are usually difficult to train because of their strong internal non-linearities. We propose a novel, fast-converging neural network architecture as a generic solution to image(s)-to-image(s) inverse problems across different domains. Here we show that this approach is effective at predicting phases from direct intensity measurements, imaging objects from diffuse reflections, and denoising scanning transmission electron microscopy images, simply by changing the training dataset. This opens a way to solve such problems statistically from large datasets, in contrast to implementing explicit inversion algorithms derived from their mathematical formulations. Previous works have focused far more on how to reconstruct than on what can be reconstructed. Our strategy offers a paradigm shift.
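
To make the "one architecture, different training data per task" idea concrete, the following is a minimal, hypothetical sketch in PyTorch. The toy encoder-decoder network and the training loop shown here are illustrative assumptions, not the architecture proposed in this work; only the contents of the data loader would change between inverse problems (e.g., intensity/phase pairs, diffuse-reflection/object pairs, or noisy/clean STEM image pairs).

```python
# Illustrative sketch only: a generic image-to-image CNN re-trained on a
# different (measurement, target) dataset for each inverse problem.
# This is NOT the architecture proposed in this work.

import torch
import torch.nn as nn


class SimpleImageToImageNet(nn.Module):
    """Toy encoder-decoder mapping a 1-channel image to a 1-channel image."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-3) -> None:
    """Generic supervised training on (measurement, ground-truth) image pairs.
    Swapping the dataset in `loader` changes which inverse problem is learned."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for measurement, target in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(measurement), target)
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    # Stand-in data: random 64x64 "measurements" and "targets".
    # In practice these would be task-specific pairs, e.g. intensity images
    # and their phases, or noisy and clean STEM images.
    inputs = torch.rand(16, 1, 64, 64)
    targets = torch.rand(16, 1, 64, 64)
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(inputs, targets), batch_size=4
    )
    net = SimpleImageToImageNet()
    train(net, loader, epochs=1)
```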