Numerical modelling of welding processes is often performed using a sequentially coupled FE thermo-mechanical analysis to predict both the thermal and mechanical effects induced by the process. The accuracy of the predicted residual stresses and distortions is highly dependent upon an accurate representation of the thermal field. In this approach, the physics of the melt pool is replaced with a heat source model which represents the heat flux distribution of the process. Many heat source models exist; however, the parameters which define their geometrical distribution have to be calibrated using experimental data. Currently, the most common method is trial and error, iterating until the predicted thermal history and melt pool geometry accurately represent the experimental data. Although this is a simple approach, it is time consuming and inherently inaccurate. Therefore, this study presents an automated calibration process, which first determines the optimum element size for the FE mesh and then refines the parameters of the heat source model using an inverse approach. The proposed procedure was implemented for laser beam welding operating in both the conduction and keyhole regimes. To ensure that both the thermal history data and the melt pool geometry were predicted accurately, a multi-objective optimisation was required. The proposed methodology was experimentally validated by welding nine IN718 samples using an Nd:YAG laser heat source. A good correlation between the experimental and numerical data sets was apparent. With regard to the predicted melt pool geometry, the maximum errors for the width, depth and area of the melt pool were 8.4%, 4.0% and 11.0%, respectively; the minimum errors were 1.5%, 0.3% and 0.3%, respectively. For the temperature profiles, the maximum and minimum errors in the peak temperature were 8.6% and 1.2%. Overall, the proposed calibration procedure allows automation of an otherwise manual, time-consuming task.
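
The abstract does not specify the optimisation algorithm, so the sketch below is only a conceptual illustration of the inverse calibration step: heat source parameters are fitted by minimising a weighted-sum scalarisation of the relative errors in melt pool width, depth, area and peak temperature, which is one common way to reduce such a multi-objective fit to a single objective. The target values, the weights, the parameterisation, and the `run_fe_model` surrogate are all hypothetical placeholders, and SciPy's Nelder-Mead simplex stands in for whatever optimiser the study actually employs.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical measured targets from one weld trial:
# melt pool width (mm), depth (mm), area (mm^2), peak temperature (degC).
measured = np.array([1.20, 0.85, 0.80, 1650.0])
weights = np.array([1.0, 1.0, 1.0, 1.0])  # relative importance of each objective

def run_fe_model(params):
    """Stand-in for the FE thermal solve.

    A real implementation would update the heat source parameters
    (e.g. the radii and penetration depth of a conical Gaussian source),
    run the transient FE analysis, and post-process the predicted
    width, depth, area and peak temperature. Here a cheap analytical
    surrogate keeps the example self-contained and runnable.
    """
    r_top, r_bot, depth = params
    width = r_top + r_bot                 # toy relations, not real weld physics
    area = 0.5 * (r_top + r_bot) * depth
    peak = 1200.0 + 500.0 * depth
    return np.array([width, depth, area, peak])

def objective(params):
    """Weighted sum of squared relative errors across all targets, so that
    melt pool geometry and thermal history are fitted simultaneously."""
    rel_err = (run_fe_model(params) - measured) / measured
    return float(np.sum(weights * rel_err**2))

x0 = np.array([0.8, 0.3, 0.5])  # initial guess for the heat source parameters
result = minimize(objective, x0, method="Nelder-Mead",
                  options={"xatol": 1e-4, "fatol": 1e-8})
print("calibrated parameters:", result.x)
print("residual objective:", result.fun)
```

In practice each evaluation of `objective` would require a full FE solve, which is why a preliminary step that fixes the optimum element size matters: it bounds the cost of every iteration before the inverse optimisation begins.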