Both spatial and non-spatial data are critical to a country's social and economic development. Land use and land cover (LULC) maps provide data from which land information can be retrieved and environmental changes detected. Several recent studies have used deep learning approaches to generate LULC maps from various remote-sensing image formats, correctly recognizing land cover elements such as built-up areas, forests, and water. Spaceborne radar imagery is a worthwhile test case for deep learning methods because radar sensors operate under all weather and lighting conditions and yield distinct image characteristics depending on the wavelength. Although many researchers have applied deep learning techniques to classify land cover features, most studies either consider too few or too many land cover classes or lack suitable evaluation metrics to justify the classification results. The present study performs semantic segmentation of Sentinel-1 C-band synthetic aperture radar (SAR) images of the Ahmedabad, Anand, and Kheda regions of Gujarat using four deep learning models. U-Net, SegNet, FCN, and DeepLabV3+ have frequently been employed in prior research to classify SAR and other satellite image formats, with excellent results in identifying different land cover features. Training samples for five primary classes of ground-truth data were obtained using Google Earth and Sentinel-2 images as reference maps. To segment the Sentinel-1 image, the four semantic segmentation models named above were selected and fine-tuned. All models performed well, with overall accuracy ranging from 76% to 90%. U-Net performed best, followed by DeepLabV3+, in classifying all five land cover features, with mean intersection over union (IoU) of 0.67 and 0.55, respectively. The results reveal that, using a self-generated reference map, Sentinel-1 dual-polarization images can be used to classify the primary land cover features.
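
The mean intersection over union reported above can be illustrated with a minimal sketch. The function below is an assumption about how mIoU is typically computed (per-class IoU on flattened label maps, averaged over classes present in either map); it is not the authors' evaluation code.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean IoU over classes, from flat integer label arrays.

    Illustrative sketch of the mIoU metric; the per-class averaging
    and the skipping of absent classes are assumptions.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# toy example: two 5-pixel label maps with 2 classes
t = np.array([0, 0, 1, 1, 1])
p = np.array([0, 1, 1, 1, 0])
print(round(mean_iou(t, p, 2), 3))  # → 0.417
```

Here class 0 has IoU 1/3 and class 1 has IoU 1/2, so the mean is about 0.417; the 0.67 reported for U-Net would be this average taken over the five land cover classes.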