This paper introduces a novel model for the precise extraction of lung nodules from 3D computed tomography (CT) scans. Our approach comprises two preprocessing stages that refine the search space and improve nodule segmentation. First, we leverage a two-level joint Markov-Gibbs random field (MGRF) model to delineate the lung region, effectively distinguishing nodules attached to the lung wall from chest tissue with similar visual characteristics. Second, a deep learning U-net localizes the region of interest (ROI) containing the lung nodule, minimizing the inclusion of surrounding lung tissue. Segmentation is then refined by a 3D U-net trained with a novel loss function that mitigates under- and over-segmentation. The resulting segmentation robustly captures lung nodule morphology and volume, as validated by the Dice coefficient (DCE), absolute volume difference (AVD), 95th-percentile Hausdorff distance (HD), sensitivity, and specificity. To assess our approach, we conducted comprehensive experiments on in vivo data from 50 patients and on 679 subjects from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. The LIDC-IDRI dataset, a seminal resource for computer-aided diagnosis (CAD) of lung nodules, provides annotations that support detection, segmentation, classification, and quantification tasks. Our experiments demonstrate that our model outperforms existing deep learning methods, most notably on the 95th-percentile HD and DCE metrics. While limited demographic information constrains a more comprehensive analysis, the robust performance of our approach underscores its potential for integration into AI-based nodule assessment systems.
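For reference, the evaluation metrics cited above can be computed on binary segmentation masks roughly as in the following minimal sketch. This is an illustrative NumPy/SciPy implementation of the standard metric definitions, not the authors' code; the function names, the surface-based HD95 approximation, and the voxel-spacing parameters are assumptions introduced here for clarity.

```python
# Illustrative sketch of the reported metrics (DCE, AVD, sensitivity,
# specificity, 95th-percentile Hausdorff distance) on 3D binary masks.
# Assumes non-empty masks of equal shape; not the authors' implementation.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred, gt):
    """Dice coefficient (DCE): 2*|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def absolute_volume_difference(pred, gt, voxel_volume=1.0):
    """AVD: absolute difference in segmented volume (voxel_volume, e.g. mm^3)."""
    return abs(int(pred.sum()) - int(gt.sum())) * voxel_volume

def sensitivity_specificity(pred, gt):
    """Voxel-wise sensitivity (true-positive rate) and specificity (true-negative rate)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn + 1e-8), tn / (tn + fp + 1e-8)

def _surface(mask):
    """Boundary voxels of a binary mask (voxels removed by one erosion step)."""
    return mask & ~binary_erosion(mask)

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric 95th-percentile Hausdorff distance between mask surfaces."""
    pred_surf = _surface(pred.astype(bool))
    gt_surf = _surface(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    d_pred_to_gt = dist_to_gt[pred_surf]
    d_gt_to_pred = dist_to_pred[gt_surf]
    return np.percentile(np.hstack([d_pred_to_gt, d_gt_to_pred]), 95)
```

The 95th-percentile variant of the Hausdorff distance is used, as in the abstract, because it is less sensitive to isolated outlier voxels than the maximum surface distance.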