Accurately identifying and contouring organs at risk (OARs) is a crucial step in radiation treatment planning, as it underpins precise dose calculation. The task is especially challenging in computed tomography (CT) images because of the irregular boundaries of the organs under study. Current clinical practice relies on manual contouring of CT images, which is tedious and time‐consuming, and the results vary with the observer's skill level, the environment, and the equipment used. A deep learning‐based automatic contouring technique for segmenting OARs would mitigate these problems and produce consistent results with minimal time and human effort. We propose a conditional generative adversarial network (GAN)‐based technique for the semantic segmentation of OARs in abdominal CT images. The residual blocks of the generator network include a multi‐scale context layer that captures more generic features, substantially improving performance and reducing losses. A comparative analysis is conducted using assessment measures widely employed in segmentation. The results show substantial improvement, with mean Dice scores of 98.0% for the liver, 96.6% for the kidney, 98.2% for the spleen, and 86.1% for the pancreas. The proposed GAN‐based model accurately segments these four abdominal organs, and the results demonstrate that it is competitive with existing state‐of‐the‐art abdominal OAR segmentation techniques.
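For reference, the Dice similarity coefficient used to report these results can be computed as in the minimal NumPy sketch below. The function name, epsilon smoothing term, and toy masks are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|); eps avoids
    division by zero when both masks are empty (illustrative choice).
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy example: two 4x4 masks of 8 foreground pixels each, overlapping in 4.
a = np.zeros((4, 4), dtype=np.uint8)
a[:2, :] = 1
b = np.zeros((4, 4), dtype=np.uint8)
b[1:3, :] = 1
print(round(dice_score(a, b), 3))  # 2*4 / (8+8) = 0.5
```

A Dice score of 1.0 indicates perfect overlap with the ground-truth contour, so the reported values (e.g., 98.0% for the liver) correspond to near-complete agreement with the manual reference segmentation.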