In many applications in neuroimaging, social science, international relations, chemometrics, genomics, and molecular omics, datasets involve variables that are best represented as a multidimensional array, or tensor, which extends the familiar two‐way data matrix to higher dimensions. Rather than vectorizing tensor‐valued variables prior to analysis, which discards structural information and degrades inference, new methods have emerged that model regression relationships directly between variables with tensor‐valued responses, predictors, or both. Bayesian approaches, in particular, have shown great promise in tensor regression applications. A remarkable feature of fully Bayesian approaches is that they allow flexible modeling of tensor‐valued parameters in regressions involving tensor variables and naturally characterize uncertainty in both parametric and predictive inference. This article reviews Bayesian tensor regression models developed in recent years. We organize the methods according to the objective of the analysis. We begin with tensor regression approaches with a scalar response and a tensor‐valued covariate, discussing both parametric and nonparametric modeling options and applications in this framework. We then address inference with a tensor response and a vector of covariates, with applications including task‐related brain activation and connectivity studies. Finally, we discuss Bayesian models involving both a tensor response and a tensor covariate. The discussion of each model is accompanied by available results on its posterior contraction properties, laying out the restrictions on key model parameters (such as the tensor dimensions) needed to draw accurate posterior inference.
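To fix ideas, the sketch below illustrates the scalar-on-tensor regression structure mentioned above: a scalar response regressed on a three-way tensor covariate through a coefficient tensor with a low-rank CP (PARAFAC) structure, a device used by many of the reviewed models to make estimation tractable. The dimensions, rank, and noise level are hypothetical, and the snippet only simulates data and evaluates the model; it is not an implementation of any specific Bayesian method discussed in the review.

```python
# Minimal illustrative sketch (assumed setup, not a specific reviewed model):
# scalar-on-tensor regression y_i = <B, X_i> + eps_i, where the coefficient
# tensor B has a rank-R CP structure built from mode-wise margin vectors.
# Bayesian methods typically place priors on these margins.
import numpy as np

rng = np.random.default_rng(0)
p1, p2, p3, R, n = 10, 10, 5, 2, 50   # hypothetical tensor dimensions, CP rank, sample size

# Rank-R coefficient tensor assembled from its CP margins
margins = [rng.normal(size=(p, R)) for p in (p1, p2, p3)]
B = np.einsum('ir,jr,kr->ijk', *margins)          # p1 x p2 x p3 coefficient tensor

# Tensor-valued covariates and scalar responses
X = rng.normal(size=(n, p1, p2, p3))
y = np.einsum('nijk,ijk->n', X, B) + rng.normal(scale=0.5, size=n)

# The low-rank structure reduces the free parameters from p1*p2*p3 = 500
# to R*(p1 + p2 + p3) = 50, which is what keeps posterior inference
# feasible as the tensor dimensions grow.
print(y[:5])
```

The reduction in parameter count shown in the final comment is the central reason low-rank formulations appear throughout the models reviewed here; posterior contraction results typically require such restrictions on the rank and tensor dimensions relative to the sample size.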