Depth information recovered from still 2D images plays an important role in automated driving, driving safety, and robotics. Monocular depth estimation is generally regarded as an ill-posed and inherently ambiguous problem, and a key challenge is how to obtain global information efficiently, since pure convolutional neural networks (CNNs) extract only local information. To address this, some previous works used conditional random fields (CRFs) to capture global information, but CRFs are notoriously difficult to optimize. In this paper, a novel hybrid neural network is proposed to solve this problem and predict a dense depth map from a single still image. Specifically, a deep residual network first extracts multi-scale local information, and feature correlation (FCL) blocks then correlate these features. Finally, an attention-based feature selection mechanism fuses the multi-layer features, and multi-layer recurrent neural networks (RNNs) with bidirectional long short-term memory (Bi-LSTM) units serve as the output layer. Furthermore, a novel logarithm exponential average error (LEAE) is proposed to overcome the over-weighting problem. The multi-scale feature correlation network (MFCN) is evaluated on the large-scale KITTI benchmark (LKT), a subset of the KITTI raw dataset, and on NYU Depth v2. The experiments show that the proposed unified network outperforms existing methods and sets a new state of the art on the LKT dataset. Importantly, this depth estimation method can be widely applied to collision risk assessment and avoidance in driver assistance or automated pilot systems, improving safety in a more economical and convenient way.
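The abstract names the LEAE loss but does not give its definition. Purely as an illustration of the stated idea (preventing a few large per-pixel errors from dominating the average, by working in log space before re-exponentiating), here is a hypothetical sketch; the function name, the eps parameter, and the exact formula are assumptions, not the paper's actual LEAE:

```python
import math

def leae_sketch(pred, target, eps=1e-6):
    """Hypothetical log-exp average error (NOT the paper's definition).

    Averages absolute log-depth errors, then re-exponentiates, so a few
    large errors are damped rather than over-weighted in the mean.
    """
    # Per-pixel absolute error between log depths; eps guards log(0).
    log_errs = [abs(math.log(p + eps) - math.log(t + eps))
                for p, t in zip(pred, target)]
    # Average in log space, then map back; zero when pred == target.
    return math.exp(sum(log_errs) / len(log_errs)) - 1.0
```

Because the errors are averaged before the exponential is applied, one outlier pixel raises the loss far less than it would in a plain squared-error mean, which matches the over-weighting motivation described above.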