Some roadside objects, such as trees, poles, and fences, pose a significant danger to pedestrians and vehicles when they are too close to the road. Their proximity to the road can change over time due to natural conditions or human activity. Early detection of severe roadside conditions can help avoid accidents and save lives. However, detecting severe roadside objects demands substantial resources and new techniques because of the size and complexity of the road network. Deep learning and image processing techniques can be leveraged to meet this need and build an automatic roadside severity detection system. In this work, we propose a novel roadside attribute and distance calculation technique that extends our previous work in this area (the lane-line method). That work relied on detected lane-line widths to calculate distances and produced errors under challenging road conditions and in the presence of misclassifications. Here, we combine camera configuration data with a neural network detector to develop a distance-versus-pixel model for reliable roadside severity distance calculation. We use camera metadata to transform the 2D image data predicted by the deep neural network into 3D space. The improved model was tested on a real-world dataset. Compared to the lane-line method, the new combined model achieved accuracy improvements of 36% and 37.5% for the right- and left-hand-side distances, respectively.

INDEX TERMS deep learning, fully convolutional neural networks, road safety attributes

I. INTRODUCTION