The rapidly evolving landscape of medical imaging necessitates innovative approaches to enhance diagnostic accuracy, particularly in liver and tumor segmentation. Current methodologies often grapple with limitations such as the need for extensive labeled datasets and the challenge of generalizing across diverse pathological presentations. Addressing these constraints, this study introduces a deep learning model that combines a U-Net architecture, Generative Adversarial Networks (GANs), and Monte Carlo Dropout, each component targeting a distinct barrier.

Traditional segmentation models are hindered by their dependence on large annotated datasets, which are both scarce and labor-intensive to produce. They also frequently struggle to maintain consistent performance across varying imaging conditions, a pivotal requirement in medical diagnostics. To address the first challenge, the proposed model fine-tunes a U-Net pre-trained on extensive medical image datasets such as NIH Chest X-ray or MIMIC-CXR. This transfer-learning strategy leverages the knowledge already encoded in the network, significantly enhancing the model's ability to discern and adapt to the specific characteristics of liver and tumor images.

In parallel, the model harnesses GANs for data augmentation, generating synthetic yet realistic medical images. This addresses dataset variability, equipping the model to generalize more effectively across a spectrum of liver and tumor appearances and bolstering its robustness. The approach is particularly advantageous for cases with diverse lighting, angles, or pathological conditions, traditionally a stumbling block for segmentation models.
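The adversarial augmentation idea can be sketched in miniature. The toy below trains a one-dimensional GAN with manual gradients: an affine generator learns to produce samples that a logistic discriminator cannot tell apart from "real" intensity values. Everything here is an illustrative assumption (1-D data standing in for images, the N(4, 0.5) target distribution, the learning rate), not the paper's actual image-scale architecture; it shows only the alternating generator/discriminator updates that underlie GAN-based augmentation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: a 1-D stand-in for tissue intensities, N(mean=4, std=0.5).
def sample_real(n):
    return rng.normal(4.0, 0.5, size=n)

# Generator: affine map of noise z ~ N(0, 1) -> x_g = w_g * z + b_g.
w_g, b_g = 0.1, 0.0
# Discriminator: logistic regression D(x) = sigmoid(a * x + b).
a, b = 0.0, 0.0
lr = 0.05

for step in range(3000):
    n = 64
    x_real = sample_real(n)
    z = rng.normal(size=n)
    x_fake = w_g * z + b_g

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * x_real + b)
    d_fake = sigmoid(a * x_fake + b)
    grad_a = np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake)
    grad_b = np.mean(1 - d_real) - np.mean(d_fake)
    a += lr * grad_a
    b += lr * grad_b

    # Generator descent on the non-saturating loss -log D(fake).
    d_fake = sigmoid(a * x_fake + b)
    g_x = -(1 - d_fake) * a            # dL/dx_fake
    w_g -= lr * np.mean(g_x * z)       # chain rule: dx_fake/dw_g = z
    b_g -= lr * np.mean(g_x)           # chain rule: dx_fake/db_g = 1

# Synthetic samples that could augment the real training set.
synthetic = w_g * rng.normal(size=1000) + b_g
```

At scale, the same alternating updates (with convolutional networks in place of the affine maps) yield synthetic liver/tumor images that enlarge and diversify the training set.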
Moreover, the incorporation of Monte Carlo Dropout for uncertainty estimation marks a significant stride toward clinical applicability. This technique yields pixel-wise uncertainty maps, offering insight into the model's confidence in its predictions. Such transparency is crucial in clinical settings, where understanding the model's certainty can guide more informed and cautious decision-making, particularly in scenarios where the cost of erroneous segmentation is high. Empirical evaluation of this composite model on Kaggle datasets demonstrates superior performance, with improvements in precision (3.9%), accuracy (4.9%), recall (4.5%), and AUC (8.5%), and a 3.5% reduction in delay compared to existing methods. These advancements not only enhance the efficacy of liver and tumor segmentation but also pave the way for broader applications in medical imaging diagnostics.
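The Monte Carlo Dropout step admits a compact sketch: keep dropout active at inference, run T stochastic forward passes, and take the per-pixel mean as the segmentation probability and the per-pixel standard deviation as the uncertainty map. The tiny "segmentation head", the 4x4 feature map, the dropout rate, and T = 100 below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

H = W = 4
features = rng.normal(size=(H, W, 8))        # per-pixel feature vectors
weights = rng.normal(scale=0.5, size=(8,))   # fixed "trained" head weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(feat, drop_p=0.5):
    # Dropout stays ACTIVE at inference: each pass samples a fresh mask.
    mask = rng.random(weights.shape) >= drop_p
    w = weights * mask / (1.0 - drop_p)      # inverted-dropout scaling
    return sigmoid(feat @ w)                 # (H, W) probability map

T = 100                                      # number of stochastic passes
samples = np.stack([forward(features) for _ in range(T)])

mean_map = samples.mean(axis=0)              # segmentation probability
uncertainty_map = samples.std(axis=0)        # pixel-wise uncertainty
```

Pixels with high `uncertainty_map` values are the ones a clinician would review first; thresholding the map gives a simple triage rule for flagging low-confidence regions.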