Many intelligent transportation systems, such as advanced driver assistance systems (ADAS), have been developed to increase traffic safety. ADAS have the potential to save lives and reduce crashes by eliminating human error in the driving process. A lane detection system identifies and estimates the position of the lane boundaries in front of the ego-vehicle; it is therefore a crucial and fundamental component of various ADAS, such as lane keeping assistance and lane departure warning systems. This paper presents a vision-based lane detection method for urban roads. First, we define a region of interest to exclude misleading parts of the road image, then obtain a bird's-eye view of the road in front of the ego-vehicle by applying inverse perspective mapping. Second, we exploit the distinct colors of lane markings to achieve robust detection of lane-marking candidates. Finally, the estimated lane boundaries are represented by quadratic models whose parameters are estimated from the detected lane pixels using the RANSAC algorithm. Furthermore, we present a thorough evaluation of the detection performance of the proposed method using the ground-truth data of the Caltech dataset, together with a comparative analysis of the quadratic model used in the proposed method against other models presented in the literature. Detection results show the effectiveness of the proposed method in detecting lane boundaries under diverse urban road conditions, including curved lanes, shadows, illumination variations, and the presence of street writings. Moreover, the overall process takes an average of 30.63 milliseconds per frame.
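The final step, fitting a quadratic lane model to detected lane pixels with RANSAC, can be sketched as follows. This is a minimal illustration using only NumPy; the function name, parameter values, and sampling strategy are assumptions for the sketch, not details taken from the paper:

```python
import numpy as np

def ransac_quadratic(points, n_iters=200, threshold=2.0, rng=None):
    """Fit a quadratic lane model x = a*y^2 + b*y + c with RANSAC.

    points: (N, 2) array of (y, x) lane-pixel candidates, where y is the
    image row and x the column (hypothetical convention for this sketch).
    Returns the coefficients (a, b, c) of the best consensus fit.
    """
    rng = np.random.default_rng(rng)
    y, x = points[:, 0], points[:, 1]
    best_inliers = None
    for _ in range(n_iters):
        # A quadratic is determined by 3 points: sample a minimal set.
        idx = rng.choice(len(points), size=3, replace=False)
        coeffs = np.polyfit(y[idx], x[idx], deg=2)
        # Inliers are candidates whose horizontal deviation from the
        # hypothesized curve is below the pixel threshold.
        residuals = np.abs(np.polyval(coeffs, y) - x)
        inliers = residuals < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the full consensus set for a more stable final estimate.
    return np.polyfit(y[best_inliers], x[best_inliers], deg=2)
```

Sampling the minimal set of three points keeps each hypothesis cheap, and the final least-squares refit over all inliers reduces the noise of the minimal-sample estimate.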