U-values of building elements are often determined from point measurements, with infrared imagery used to identify a suitable location for the sensors. Current methods hold that surface areas exhibiting a homogeneous temperature, away from regions of thermal bridging, can be used to obtain U-values. In doing so, however, the resulting U-value is assumed to represent the entire building element, contrary to the information given by the initial infrared inspection. This can be problematic when the measured U-values are applied to models that predict energy performance. Three techniques were used to measure the U-values of the external building elements of a full-scale replica of a pre-1920s U.K. home under controlled conditions: point measurements using heat flux meters, and two variants of infrared thermography, at high and low resolution. The U-values determined with each technique were used to calibrate a model of the building, from which the heat transfer coefficient (HTC), annual energy consumption, and fuel cost were predicted. Point measurements and low-resolution infrared thermography were found to represent only a relatively small proportion of the overall U-value distribution. Propagating the variation of U-values found using high-resolution thermography caused the predicted HTC to vary between 183 W/K and 235 W/K (±12%). This in turn produced variations in the predicted annual energy consumption for heating (between 4923 kWh and 5481 kWh, ±11%) and in the predicted cost of that energy (between £227 and £281, ±24%). This variation is indicative of the sensitivity of energy simulations to sensor placement when point measurements are used to determine U-values.
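For context, the propagation step presumably follows the conventional whole-building heat loss relation; this is a sketch assuming the standard definition, in which the per-element areas \(A_i\) and the ventilation term \(C_v\) are not reported in this abstract:

\[
\mathrm{HTC} = \sum_i U_i A_i + C_v
\]

Under this relation, any spread in the measured \(U_i\) maps proportionally onto the fabric component of the HTC, which is why a spread in measured U-values translates directly into the reported spread in the predicted HTC.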