This study investigates the significance of trends in four temperature time series: Central England Temperature (CET), Stockholm, Faraday-Vernadsky, and Alert. First, the robustness and accuracy of various trend detection methods are examined: ordinary least squares, robust and generalized linear model regression, Ensemble Empirical Mode Decomposition (EEMD), and wavelets. Tests with surrogate data show that these trend detection methods are robust to nonlinear trends, superposed autocorrelated fluctuations, and non-Gaussian fluctuations. An analysis of the four temperature time series reveals evidence of long-range dependence (LRD) and nonlinear warming trends. The significance of these trends is tested against climate noise, which is generated by three different methods: (i) a short-range-dependent first-order autoregressive process [AR(1)], (ii) an LRD model, and (iii) phase scrambling. It is found that the ability to distinguish the observed warming trend from stochastic trends depends on the model representing the background climate variability. Strong evidence is found of a significant warming trend at Faraday-Vernadsky that cannot be explained by any of the three null models. The authors find moderate evidence of warming trends for the Stockholm and CET time series that are significant against AR(1) and phase scrambling but not against the LRD model. This suggests that the degree of significance of climate trends depends on the null model used to represent intrinsic climate variability. The study highlights that statistical trend tests should employ more than one simple null model of intrinsic climate variability, allowing one to better gauge the confidence that can be placed in the significance of detected trends.
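To make the surrogate-based significance test concrete, the following is a minimal sketch in Python (NumPy only) of the three null models named above, not the authors' implementation. The AR(1) fit via the lag-1 autocorrelation, the spectral synthesis of 1/f^beta noise for the LRD surrogate (with the exponent beta assumed to be pre-estimated, e.g., by detrended fluctuation analysis), the detrend-then-fit step, and the two-sided Monte Carlo p value on the OLS slope are all illustrative assumptions; the paper's actual LRD model may differ (e.g., a fitted FARIMA or fractional Gaussian noise process).

import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x):
    """Ordinary least squares trend slope (units per time step)."""
    t = np.arange(len(x))
    return np.polyfit(t, x, 1)[0]

def ar1_surrogate(resid, rng):
    """(i) Short-range-dependent surrogate: first-order autoregressive
    process [AR(1)] fitted to the detrended residuals."""
    phi = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
    sd = np.std(resid[1:] - phi * resid[:-1])
    y = np.empty_like(resid)
    y[0] = resid[0]
    for i in range(1, len(resid)):
        y[i] = phi * y[i - 1] + rng.normal(0.0, sd)
    return y

def lrd_surrogate(resid, beta, rng):
    """(ii) LRD surrogate: spectral synthesis of 1/f**beta noise, rescaled
    to the residual variance (beta assumed pre-estimated, e.g., by DFA)."""
    n = len(resid)
    f = np.fft.rfftfreq(n)
    amps = np.zeros_like(f)
    amps[1:] = f[1:] ** (-beta / 2.0)          # power-law spectrum
    phases = rng.uniform(0.0, 2.0 * np.pi, len(f))
    y = np.fft.irfft(amps * np.exp(1j * phases), n=n)
    return y * resid.std() / y.std()

def phase_scramble(resid, rng):
    """(iii) Phase scrambling: keep the Fourier amplitudes (and hence the
    power spectrum) of the residuals but randomize the phases."""
    n = len(resid)
    X = np.fft.rfft(resid)
    Xs = np.abs(X) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, len(X)))
    Xs[0] = X[0]                               # preserve the mean
    if n % 2 == 0:
        Xs[-1] = np.abs(X[-1])                 # Nyquist bin must stay real
    return np.fft.irfft(Xs, n=n)

# Toy "observed" record: autocorrelated noise plus a weak linear warming trend.
n = 150
t = np.arange(n)
obs = 0.01 * t + np.convolve(rng.normal(0, 0.5, n), [0.5, 0.3, 0.2], "same")

obs_slope = ols_slope(obs)
resid = obs - np.polyval(np.polyfit(t, obs, 1), t)  # fit nulls to residuals

null_models = {
    "AR(1)":           lambda: ar1_surrogate(resid, rng),
    "LRD (beta=0.6)":  lambda: lrd_surrogate(resid, 0.6, rng),
    "phase scrambled": lambda: phase_scramble(resid, rng),
}
for name, gen in null_models.items():
    slopes = np.array([ols_slope(gen()) for _ in range(2000)])
    p = np.mean(np.abs(slopes) >= abs(obs_slope))   # two-sided Monte Carlo p
    print(f"{name:16s} p = {p:.3f}")

Comparing the observed slope against the slope distribution of each surrogate ensemble mirrors the logic of the test described above: because the LRD null carries stronger low-frequency variability, it tends to produce the widest slope distribution and hence the most conservative significance, consistent with trends that pass the AR(1) and phase-scrambling tests but not the LRD test.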