For self-driving vehicles, detecting lane lines across varying scenarios is a fundamental yet challenging task. The rise of deep learning in recent years has fueled rapid progress in autonomous driving. However, existing deep-learning-based lane detection methods impose heavy computational requirements, which restricts their applicability. This paper proposes an improved attention deep neural network (DNN), a lightweight semantic segmentation architecture designed for efficient computation under tight memory budgets, which comprises two branches operating at different resolutions. The proposed network integrates fine details, captured by local pixel interactions at high resolution, into global contexts at low resolution, producing dense feature maps for the prediction task. Exploiting the distinct characteristics of features at each resolution, different attention mechanisms guide the network to use its model parameters effectively. The proposed network achieves results comparable to state-of-the-art methods on two popular lane detection benchmarks (TuSimple and CULane) while running faster, at 259 frames per second (FPS) on the CULane dataset, with a total of only 1.57 M model parameters. This study provides a practical and meaningful reference for deploying lane detection on memory-constrained devices.
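The core fusion idea, injecting upsampled low-resolution global context into the high-resolution detail branch under attention-derived channel weights, can be sketched as follows. This is a minimal stdlib-only illustration, not the paper's exact formulation: the nearest-neighbour upsampling and the softmax-over-pooled-channels attention are simplifying assumptions, and all function names are hypothetical.

```python
import math

def nearest_upsample(channel, factor):
    """Nearest-neighbour upsampling of one H x W channel (illustrative stand-in
    for the network's learned upsampling)."""
    return [[v for v in row for _ in range(factor)]
            for row in channel for _ in range(factor)]

def channel_attention(channels):
    """Softmax weights over global-average-pooled channel responses
    (a simple assumed form of attention, not the paper's mechanism)."""
    gaps = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in channels]
    m = max(gaps)
    exps = [math.exp(g - m) for g in gaps]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(high, low, factor):
    """Add attention-weighted, upsampled global context (low-res branch)
    to fine-detail features (high-res branch), per channel."""
    up = [nearest_upsample(ch, factor) for ch in low]
    weights = channel_attention(up)
    return [[[h + w * u for h, u in zip(hrow, urow)]
             for hrow, urow in zip(hch, uch)]
            for hch, uch, w in zip(high, up, weights)]

# Toy usage: two channels, high-res 4x4, low-res 2x2, upsample factor 2.
high = [[[1.0] * 4 for _ in range(4)], [[0.0] * 4 for _ in range(4)]]
low = [[[2.0] * 2 for _ in range(2)], [[2.0] * 2 for _ in range(2)]]
fused = fuse(high, low, 2)
```

Both low-resolution channels pool to the same value here, so each receives attention weight 0.5 and contributes 1.0 to every fused position.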