High-resolution X-ray computed tomography (CT) instruments, also known as three-dimensional (3D) X-ray microscopes, can be adapted for dimensional metrology applications such as geometric dimensioning and tolerancing of metallic components. However, CT scanning times can be prohibitively long for industrial measurement and inspection tasks owing to the poor contrast from X-ray attenuation in ferrous metals, especially when spatial resolutions below 5 µm are required. This paper describes a software-defined approach to dramatically reducing total exposure time (or scanning time) while keeping the resolution loss within 2 µm of baseline scans acquired over 6 hours. Here, we combine two deep learning (DL) models in our surface extraction workflow to compensate for the lower signal-to-noise ratio of short-exposure data (acquired with a lower number of projections): (1) a surface determination algorithm (post-reconstruction), and (2) a denoising algorithm (pre-reconstruction). Training data were acquired from a scan of an 8-hole automotive fuel injector (sample 1) with a nominal diameter of 165 µm per hole. To test the accuracy of the workflow, a separate scan of a 6-hole side-mount injector (sample 2) was acquired. For both samples, the acquired X-ray projections (or radiographs) were binned down by up to 10X to simulate faster scans. For training and testing the workflows, the full-exposure scans (baseline) were used as targets and the shorter-exposure scans as inputs to the deep learning models. To quantify the loss of surface accuracy relative to the baseline case, a metric is formulated (in µm) and its trends are reported as the total measurement time is reduced by up to 10X (down to 0.6 hours, using only 360 projections). We report that scan times can be reduced by over 10X while limiting the resolution loss to under 1 µm.
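The sketch below illustrates, under stated assumptions, how the workflow described above fits together: radiographs are subsampled to emulate a shorter scan, a pre-reconstruction denoiser is applied, the volume is reconstructed, and a post-reconstruction surface determination step extracts the part surface. The function names, the placeholder model bodies, and the baseline projection count of 3600 (implied by the 10X reduction to 360 projections) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Conceptual sketch of the two-stage DL-assisted surface extraction workflow.
# The denoiser, reconstruction, and surface-determination steps are stand-ins;
# the paper does not specify their architectures or implementations here.

def bin_projections(projections: np.ndarray, factor: int) -> np.ndarray:
    """Keep every `factor`-th radiograph to simulate a shorter (faster) scan.

    projections : array of shape (num_projections, height, width)
    """
    return projections[::factor]

def denoise(projections: np.ndarray) -> np.ndarray:
    """Placeholder for the pre-reconstruction DL denoiser (assumption)."""
    return projections  # a trained model would be applied per radiograph

def reconstruct(projections: np.ndarray) -> np.ndarray:
    """Placeholder for tomographic reconstruction; not one of the DL stages."""
    n = projections.shape[1]
    return np.zeros((n, n, n), dtype=np.float32)  # stand-in volume

def determine_surface(volume: np.ndarray) -> np.ndarray:
    """Placeholder for the post-reconstruction DL surface determination (assumption)."""
    return volume > volume.mean()  # naive global threshold as a stand-in

# Simulate a 10X-shorter scan from an assumed 3600-projection baseline.
full_scan = np.random.rand(3600, 64, 64).astype(np.float32)
short_scan = bin_projections(full_scan, factor=10)   # 360 projections
surface = determine_surface(reconstruct(denoise(short_scan)))
```

In training, the target for each stage would come from the full-exposure baseline scan (denoised-equivalent radiographs and baseline-derived surfaces), while the binned short-exposure data serve as the inputs, as described in the abstract.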