Lane detection based on semantic segmentation can achieve high accuracy, but its computational cost remains unfriendly to mobile deployment, owing to the complex iteration and costly convolutions in convolutional neural networks (CNNs) and in CNN-based state-of-the-art (SOTA) models such as the spatial CNN (SCNN). Although the SCNN has shown its capacity to capture the spatial relationships of pixels across the rows and columns of an image, its computational cost and memory footprint are prohibitive for mobile lane detection. Inspired by channel attention and self-attention mechanisms, we propose an integrated coordinate attention (ICA) module to capture the spatial relationships of pixels. Furthermore, because ICA alone does not enhance features along the channel dimension, we design an efficient network built on a channel-enhanced coordinate attention block, named CCA, which combines ICA with channel attention modules for all-dimension feature enhancement. By replacing many repeated or iterative convolutions with attention, CCA reduces computational complexity. Our method thus strikes a balance between accuracy and speed, performing well on two lane datasets, TuSimple and ILane. At less than a few tenths of the SCNN's computational cost, CCA achieves superior accuracy. These results show that the low cost and strong performance of our design make lane detection practical in autonomous-driving scenarios.
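To illustrate the coordinate-attention idea the ICA module builds on, the following is a minimal NumPy sketch. It is not the paper's implementation: it omits the learned 1x1 convolutions and the channel-enhancement branch of CCA, and all function names here are illustrative. It only shows the direction-aware pooling that lets an attention map retain positional information along rows and columns instead of collapsing both spatial axes, as plain channel attention does.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x):
    """Toy coordinate-attention sketch (no learned weights; illustrative only).

    x: feature map of shape (C, H, W).
    Pooling along each spatial axis separately preserves position
    along the other axis, so the resulting gates are row- and
    column-aware rather than a single global channel descriptor.
    """
    # Direction-aware pooling: average over width -> (C, H, 1)
    # and over height -> (C, 1, W).
    h_pool = x.mean(axis=2, keepdims=True)   # row descriptor, (C, H, 1)
    w_pool = x.mean(axis=1, keepdims=True)   # column descriptor, (C, 1, W)
    # Gate each direction; a real module would transform these with
    # learned 1x1 convolutions before the nonlinearity.
    a_h = sigmoid(h_pool)
    a_w = sigmoid(w_pool)
    # Broadcasting recombines the two gates into a full (C, H, W)
    # attention map applied to the input features.
    return x * a_h * a_w

feat = np.random.randn(4, 8, 16)
out = coordinate_attention(feat)
assert out.shape == feat.shape
```

Because the pooling reduces each axis to a vector before the gates are recombined by broadcasting, the cost of the attention map grows with H + W rather than H x W, which is the kind of saving that lets attention replace stacks of spatial convolutions.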