Spark SQL has been widely deployed in industry, but tuning its performance remains challenging. Recent studies attempt to employ machine learning (ML) to solve this problem, but they suffer from two drawbacks. First, collecting training samples takes a long time (high overhead). Second, the optimal configuration for one input data size of an application might not be optimal for other sizes.

To address these issues, we propose LOCAT, a novel Bayesian Optimization (BO) based approach that automatically tunes the configurations of Spark SQL applications online. LOCAT introduces three techniques. The first, Query Configuration Sensitivity Analysis (QCSA), eliminates configuration-insensitive queries when collecting training samples. The second, the Datasize-Aware Gaussian Process (DAGP), models the performance of an application as a distribution over functions of both the configuration parameters and the input data size. The third, IICP, Identifies Important Configuration Parameters with respect to performance and tunes only those parameters. As such, LOCAT can tune the configurations of a Spark SQL application with low overhead and adapt to different input data sizes.

We employ Spark SQL applications from the benchmark suites TPC-DS, TPC-H, and HiBench, running on two significantly different clusters, a four-node ARM cluster and an eight-node x86 cluster, to evaluate LOCAT. The experimental results on the ARM cluster show that LOCAT accelerates the optimization procedures of Tuneful [22], DAC [66], GBO-RL [36], and QTune [37] by 6.4×, 7.0×, 4.1×, and 9.7× on average, respectively. On the x86 cluster, LOCAT reduces the optimization time of Tuneful, DAC, GBO-RL, and QTune by 6.4×, 6.3×, 4.0×, and 9.2× on average, respectively. Moreover, LOCAT improves the performance of the applications on