Recent advances in optical and electronic sensor technologies, together with the proliferation of high-performance computing hardware such as GPUs, have made real-time autonomous driving systems a reality. Consequently, research on algorithms for advanced driver assistance systems (ADASs) is expanding rapidly, with a primary focus on robust lane detection to ensure safe navigation. Given the widespread adoption of cameras on the market, lane detection relies heavily on image data, and CNN-based methods have recently attracted attention for their strong performance on lane detection tasks. However, as the global market expands, reliable lane detection must contend with diverse environmental conditions and road scenarios. This paper presents an approach that detects lanes in the road area traversed by camera-equipped vehicles. In the proposed method, a U-Net-based framework is trained on a four-channel input that augments the image with additional lane-related information reflecting lane characteristics. The fourth channel serves as an edge attention map (E-attention map), guiding the network toward more specialized learning of lane features. In addition, a scheme for weighting the loss function during training improves the stability and speed of learning, enabling robust lane detection. Ablation experiments demonstrate the optimization of each parameter and the efficiency of the proposed method, and a comparative analysis with existing CNN-based lane detection algorithms shows that the proposed training method achieves superior performance.
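
The four-channel input format can be illustrated with a minimal sketch. The abstract does not specify how the E-attention map is constructed, so a Sobel gradient magnitude is used below as a hypothetical stand-in for the edge channel; the function name and normalization choices are assumptions for illustration only.

```python
import numpy as np

def build_four_channel_input(rgb: np.ndarray) -> np.ndarray:
    """Stack an RGB image with an edge map as a fourth channel.

    NOTE: the paper's actual E-attention map construction is not given
    here; a Sobel gradient magnitude serves as a stand-in edge channel.
    """
    gray = rgb.astype(np.float32).mean(axis=2)
    # Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)
    ky = kx.T
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + gray.shape[0], j:j + gray.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    edge = np.hypot(gx, gy)
    edge = edge / (edge.max() + 1e-8)  # normalize edge channel to [0, 1]
    # Concatenate normalized RGB (3 channels) with the edge map (1 channel)
    return np.dstack([rgb.astype(np.float32) / 255.0, edge])

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
x = build_four_channel_input(img)
print(x.shape)  # (64, 64, 4)
```

The resulting H×W×4 tensor can then be fed to the first convolutional layer of a U-Net whose input channel count is set to 4 instead of the usual 3.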