The field of autonomous driving leaves minimal margin for error. Ensuring that self-driving vehicles can accurately perceive their surroundings, even under conditions of limited visibility, is of utmost importance. We propose a novel approach to enhance the accuracy of on-road object detection under limited-visibility driving and roadway conditions. The first step classifies the driving condition of an input image; the corresponding semantic segmentation model then processes the image to distinguish objects. Our dataset consists of roadway images depicting 20 distinct object classes under adverse, limited-visibility conditions. The experimental results validate our approach, with the proposed method achieving high accuracy on the training, validation, and testing data. Our classification model achieved 100% accuracy. Specifically, the proposed method achieved final mean IoU scores of 57.3%, 32.0%, 49.4%, and 47.8% for the FOG, NIGHT, RAIN, and SNOW conditions, respectively, when using the U-NET model for segmentation. These mean IoU results exceed those of traditional non-hierarchical training methods that use the same U-NET structure.
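The two-stage pipeline described above (condition classification followed by routing to a condition-specific segmentation model) can be sketched as follows. This is a minimal illustration under assumed names; `classify_condition`, `make_segmenter`, and `SEGMENTERS` are hypothetical placeholders, not the authors' actual implementation, and the real system would use trained CNN and U-NET models in place of the stubs.

```python
# Illustrative sketch of hierarchical inference: classify the driving
# condition first, then dispatch to a condition-specialized segmenter.
# All function and variable names here are hypothetical.

CONDITIONS = ["FOG", "NIGHT", "RAIN", "SNOW"]

def classify_condition(image):
    # Placeholder for the condition classifier; a real system would run a
    # trained classification network here. We fake it by reading a tag
    # stored alongside the image.
    return image["condition"]

def make_segmenter(condition):
    # Placeholder for a condition-specific U-NET; returns a dummy label
    # instead of a per-pixel segmentation mask.
    def segment(image):
        return f"mask from {condition}-specialized model"
    return segment

# One specialized segmentation model per adverse condition.
SEGMENTERS = {c: make_segmenter(c) for c in CONDITIONS}

def hierarchical_segment(image):
    # Stage 1: classify the driving condition of the input image.
    condition = classify_condition(image)
    # Stage 2: route the image to the matching segmentation model.
    return SEGMENTERS[condition](image)

if __name__ == "__main__":
    img = {"condition": "FOG", "pixels": None}
    print(hierarchical_segment(img))
```

The design choice this illustrates is that each segmenter only ever sees images from one condition, which is what allows the specialized models to outperform a single non-hierarchical model of the same architecture.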