Automated plant disease identification is an important task in the agricultural sector. With the advent of deep learning, classification and segmentation can be fully automated using convolutional neural networks (CNNs). This paper proposes the UV-Net model, with a recurrent encoder and an up-sampling decoder, for multiclass image segmentation and mask generation. The segmentation model is improved by stacking Conv2D layers before each down-convolution operation, concatenating the features extracted during convolution, combining up-sampling with transpose convolution in the expansion path, and using exponential linear unit (ELU) activation. This model is combined with an optimized, pretrained, and self-trained CNN to predict the classes of diseased leaves. Classification is further enhanced by concatenating the high-level features extracted from the pretrained CNN with the low-level features from the encoder of the UV-Net model, using a hybrid wind-driven particle swarm optimized (WDPSO) deep feature selective concatenation mechanism (DFSCM). The proposed model is trained and evaluated on multiclass tomato plant diseases such as bacterial spot, early blight, leaf mold, and target spot, and its architecture is compared with well-known CNN models including FCN, FPN, U-Net, V-Net, and Mask R-CNN. The model proves to be a robust system for detecting diseases from images of varying sizes, shapes, and lighting conditions, achieving an average training accuracy of 98.20%, a Dice coefficient of 0.8938, and an Intersection over Union (IoU) of 0.7458. The model also generates accurate masks from which the intensity and pattern of disease can be identified, enabling earlier prevention and treatment.
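For reference, the Dice coefficient and IoU figures reported above measure overlap between a predicted segmentation mask and its ground truth. A minimal NumPy sketch of how these per-mask scores are typically computed on binary masks follows; the function names and the smoothing constant `eps` are illustrative, not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over Union = |A ∩ B| / |A ∪ B| for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Example: a 2x2 prediction overlapping the target in one pixel
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(dice_coefficient(pred, target))  # 2*1/(2+1) ≈ 0.6667
print(iou(pred, target))               # 1/2 = 0.5
```

For multiclass masks such as those produced here, these scores would be computed per disease class and averaged; Dice is always at least as large as IoU on the same masks, consistent with the 0.8938 versus 0.7458 figures reported.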