Digital holography records the entire wavefront of an object, including both amplitude and phase. To reconstruct the object numerically, the hologram can be backpropagated with Fresnel-Kirchhoff integral-based algorithms such as the angular spectrum method and the convolution method. Although effective, these techniques require prior knowledge, such as the object distance, the incident angle between the two beams, and the source wavelength. Undesirable zero-order and twin images must be removed by an additional filtering operation, which is usually manual and time-consuming in the off-axis configuration. In addition, for phase imaging, the phase aberration has to be compensated, and an unwrapping step is subsequently needed to recover the true object thickness. The former requires either additional hardware or strong assumptions, whereas phase unwrapping algorithms are often sensitive to noise and distortion. Furthermore, for a multisectional object, an all-in-focus image and a depth map are desired in many applications, but current approaches tend to be computationally demanding. We propose an end-to-end deep learning framework, called a holographic reconstruction network, to tackle these holographic reconstruction problems. Through this data-driven approach, we show that it is possible to reconstruct a noise-free image without any prior knowledge, and that the framework can handle phase imaging as well as depth map generation.
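For reference, a minimal sketch of the conventional angular spectrum backpropagation mentioned above is shown below, assuming NumPy; the function name and the wavelength, pixel pitch, and propagation distance values are illustrative assumptions, not parameters taken from this work.

```python
import numpy as np

def angular_spectrum_backpropagate(hologram, wavelength, pixel_pitch, distance):
    """Backpropagate a recorded hologram to the object plane using the
    angular spectrum method (free-space transfer function in Fourier space)."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength
    # Spatial-frequency grids (cycles per unit length)
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * k * distance * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0
    # Propagate: FFT, multiply by the transfer function, inverse FFT
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

# Hypothetical usage: 532 nm source, 3.45 um pixels, 25 mm backpropagation
field = angular_spectrum_backpropagate(
    np.random.rand(512, 512),          # stand-in for a recorded hologram
    wavelength=532e-9, pixel_pitch=3.45e-6, distance=-25e-3)
amplitude, phase = np.abs(field), np.angle(field)
```

Note that this conventional route presumes the wavelength, pixel pitch, and distance are known in advance, which is exactly the prior knowledge the proposed network is intended to avoid.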