1 Abstract

A multiple-trait Bayesian LASSO (MBL) for genome-based analysis and prediction of quantitative traits is presented and applied to two real data sets. The data-generating model is a multivariate linear Bayesian regression on possibly a huge number of molecular markers, and with a Gaussian residual distribution posed. Each (one per marker) of the $T \times 1$ vectors of regression coefficients ($T$: number of traits) is assigned the same $T$-variate Laplace prior distribution, with a null mean vector and unknown scale matrix $\Sigma$. The multivariate prior reduces to that of the standard univariate Bayesian LASSO when $T = 1$. The covariance matrix of the residual distribution is assigned a multivariate Jeffreys prior, and $\Sigma$ is given an inverse-Wishart prior. The unknown quantities in the model are learned using a Markov chain Monte Carlo sampling scheme constructed using a scale-mixture of normal distributions representation. MBL is demonstrated in a bivariate context employing two publicly available data sets, using a bivariate genomic best linear unbiased prediction model (GBLUP) for benchmarking results. The first data set is one where wheat grain yields in two different environments are treated as distinct traits. The second data set comes from genotyped Pinus trees, with each individual measured for two traits, rust bin and gall volume. In MBL, the bivariate marker effects are shrunk differentially, i.e., "short" vectors are shrunk more strongly towards the origin than in GBLUP; conversely, "long" vectors are shrunk less. A predictive comparison was carried out as well where, in wheat, the comparators of MBL were bivariate GBLUP and bivariate Bayes C$\pi$, a variable selection procedure. A training-testing layout was used, with 100 random reconstructions of training and testing sets. For the wheat data, all methods produced similar predictions.

[...]

The preceding is the density of a double exponential (DE) distribution with null mean, parameter $\sqrt{\lambda^{2}/4}$, and variance $\operatorname{Var}(\beta) = 8/\lambda^{2}$. As mentioned earlier, Tibshirani (1996) and Park and Casella (2008) used the DE distribution as a conditional (given the scale parameter) prior for regression coefficients in the BL, a member of the "Bayesian Alphabet" (Gianola et al. 2009). Gianola et al. (2018) assigned the DE distribution to residuals of a linear model for the purpose of attenuating outliers, and Li et al. (2015) used the MLAP distribution for the residuals in a "robust" linear regression model for QTL mapping.

MLAP is therefore an interesting candidate prior for multi-trait marker effects in a multiple-trait generalization of the Bayesian LASSO (MBL). A zero-mean MLAP distribution has a

[...]

IW; the kernel of the density is often written as $\exp\left\{-\tfrac{1}{2}\operatorname{tr}\left[\mathbf{R}_{0}^{-1}(N+T)\,\bar{\mathbf{S}}_{e}\right]\right\}$, where $\bar{\mathbf{S}}_{e} = \mathbf{S}_{e}/(N+T)$.

[...]

Hastings algorithm tailored for making draws from the distribution having density (29). A brief description of the procedure follows.

Laplace distribution. First, six independent chains of 1500 ...
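The excerpt above mentions a Metropolis–Hastings step tailored to density (29) and six independent chains of 1500 samples, but density (29) itself is not reproduced here. The following is only a minimal sketch of a random-walk Metropolis–Hastings sampler of that general kind: the target `log_kernel` (a Gamma(3, rate 2) kernel), the proposal standard deviation, and the starting values are illustrative assumptions, not the paper's actual target or settings.

```python
# Minimal random-walk Metropolis-Hastings sketch; the target kernel, proposal scale,
# and starting values are placeholders, since density (29) is not shown in this excerpt.
import numpy as np

def log_kernel(x):
    # Placeholder log unnormalized density: a Gamma(3, rate=2) kernel on x > 0.
    return 2.0 * np.log(x) - 2.0 * x if x > 0.0 else -np.inf

def metropolis_hastings(log_kernel, x0, n_iter=1500, prop_sd=0.5, rng=None):
    """Random-walk MH with a symmetric Gaussian proposal; returns the sampled chain."""
    rng = np.random.default_rng() if rng is None else rng
    chain = np.empty(n_iter)
    x, lk = x0, log_kernel(x0)
    for t in range(n_iter):
        x_new = x + rng.normal(scale=prop_sd)
        lk_new = log_kernel(x_new)
        if np.log(rng.uniform()) < lk_new - lk:   # accept with probability min(1, ratio)
            x, lk = x_new, lk_new
        chain[t] = x
    return chain

# Six independent chains of 1500 iterations, mirroring the text (starting values arbitrary).
chains = [metropolis_hastings(log_kernel, x0=1.0 + i, rng=np.random.default_rng(i))
          for i in range(6)]
print("pooled mean:", np.mean(np.concatenate(chains)))  # ~1.5 for the Gamma(3, 2) placeholder
```

In a sampling scheme of the kind sketched in the Abstract, a step like this typically stands in for a conditional distribution that cannot be sampled in closed form (Metropolis-within-Gibbs).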
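The Abstract and the DE passage above both invoke a scale-mixture-of-normals representation of the Laplace prior. As a numerical check of that construction, the short simulation below draws a normal effect whose variance is exponentially distributed and verifies that the marginal behaves like a DE; the mixing rate $\lambda^{2}/8$ (giving a DE with rate $\lambda/2$ and variance $8/\lambda^{2}$, as stated above), the value of $\lambda$, and the sample size are assumptions for illustration, not settings from the paper.

```python
# Numerical check of the scale-mixture-of-normals construction behind the (M)BL prior:
# a normal effect whose variance is exponentially distributed has a double-exponential
# marginal. The mixing rate lambda^2/8 is an assumed parameterization chosen so that the
# resulting DE has rate lambda/2 and variance 8/lambda^2, as in the passage above.
import numpy as np

rng = np.random.default_rng(1)
lam, n = 2.0, 500_000

tau2 = rng.exponential(scale=8.0 / lam**2, size=n)  # tau^2 ~ Exp(rate = lambda^2/8)
beta = rng.normal(0.0, np.sqrt(tau2))               # beta | tau^2 ~ N(0, tau^2)

print("empirical   Var(beta):", round(beta.var(), 3))       # ~ 8/lambda^2 = 2.0
print("theoretical Var(beta):", 8.0 / lam**2)
print("empirical   E|beta|  :", round(np.abs(beta).mean(), 3))  # ~ 2/lambda = 1.0 for a DE
```

This kind of augmentation (effects conditionally Gaussian given a latent scale) is what the Abstract refers to when it says the MCMC scheme is constructed from a scale-mixture of normal distributions representation.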
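Finally, the Abstract's observation that MBL shrinks "short" effect vectors more strongly toward the origin than GBLUP and "long" vectors less can be illustrated with the familiar univariate analogue below, which contrasts ridge-type proportional shrinkage with the soft-thresholding induced by a DE prior at the posterior mode; the effect values and penalties are arbitrary, and this is a qualitative sketch, not the paper's bivariate posterior computation.

```python
# Univariate analogue of the differential shrinkage described in the Abstract: ridge
# (GBLUP-like) shrinkage pulls every effect toward zero by the same proportion, while the
# DE/LASSO posterior mode soft-thresholds, shrinking small ("short") effects relatively
# more and large ("long") effects relatively less. Penalty values are arbitrary.
import numpy as np

beta_hat = np.array([0.05, 0.2, 0.5, 1.0, 2.0])   # hypothetical least-squares marker effects
k, t = 0.5, 0.15                                   # ridge penalty and soft-threshold (assumed)

ridge = beta_hat / (1.0 + k)                                      # proportional shrinkage
lasso = np.sign(beta_hat) * np.maximum(np.abs(beta_hat) - t, 0)   # soft-thresholding

for b, r, l in zip(beta_hat, ridge, lasso):
    print(f"beta_hat={b:4.2f}  ridge={r:5.3f} ({1 - r / b:4.0%} shrunk)  "
          f"lasso={l:5.3f} ({1 - l / b:4.0%} shrunk)")
```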