Computer experiments are becoming increasingly important in scientific investigations. In the presence of uncertainty, analysts employ probabilistic sensitivity methods to identify the key drivers of change in the quantities of interest. Simulation complexity, large dimensionality and long running times may force analysts to perform statistical inference at small sample sizes. Methods designed to estimate probabilistic sensitivity measures at relatively low computational cost are therefore attracting increasing interest. We propose a fully Bayesian approach to the estimation of probabilistic sensitivity measures based on a one-sample design. We first discuss new estimators based on placing piecewise constant priors on the conditional distributions of the output given each input, obtained by partitioning the input space. We then present two alternatives, based on Bayesian non-parametric density estimation, which bypass the need for predefined partitions. In all cases, the Bayesian paradigm guarantees the quantification of uncertainty in the estimation process through the posterior distribution over the sensitivity measures, without requiring additional simulator evaluations. The performance of the proposed methods is compared to that of traditional point estimators in a series of numerical experiments comprising synthetic but challenging simulators, as well as a realistic application.

Probabilistic (or global) sensitivity measures are an indispensable complement to uncertainty quantification, as they highlight which areas should be given priority when planning data collection or further modelling efforts. Agencies such as the US Environmental Protection Agency (U.S. Environmental Protection Agency, 2009), the British National Institute for Health and Care Excellence (NICE, 2013) and the European Commission (2009) have issued guidelines recommending probabilistic sensitivity analysis as the gold standard for ensuring reliability and transparency when the output of a computer code is used for decision-making under uncertainty. Over the years, several probabilistic sensitivity measures have been proposed. Different measures enjoy alternative properties, making them preferable in different contexts and for different purposes. We recall regression-based (Helton and Sallaberry, 2009), variance-based (Saltelli and Tarantola, 2002; Jiménez Rugama and Gilquin, 2018) and moment-independent measures (Borgonovo et al., 2014), all of which offer alternative ways to quantify the degree of statistical dependence between the simulator inputs and the output. A common issue in realistic applications is that analytical expressions of these measures are unavailable and analysts must resort to estimation. This, however, is a challenging task, especially for simulators with a large number of inputs (the curse of dimensionality) or with long running times (high computational cost). Recent results (Strong et al., 2012; Strong and Oakley, 2013) point to the one-sample (or given-data) approach as an attractive design, which allows ...