Bayesian optimization (BO) has been applied to a wide range of problems. In practice, however, multiple objectives (e.g., the time and cost of moving people and/or products) must often be balanced, so multi-objective optimization is frequently required. In addition, correlated observations arise in various applications, such as materials discovery. To handle such situations effectively, random scalarizations and vector-valued Gaussian processes (GPs) are adopted. Regret bounds are analyzed for multi-objective BO (MOBO) using random scalarizations (linear and Tchebychev schemes) together with vector-valued GPs; for the Tchebychev scheme, the bounds are sublinear in the number of rounds. The theoretical analysis relies on a Bayes risk decomposition of the GP upper confidence bound (UCB). In addition, the weight distribution used to scalarize the objective functions is estimated by nonstationary Thompson sampling with a Dirichlet prior, so that the weights adapt automatically as the optimization progresses. Experimental results on 10 benchmark functions and one real application (a hydrogen storage material database) demonstrate the effectiveness of vector-valued GPs compared to real-valued GPs, as well as the benefit of the weight adaptation. Under very noisy observations, however, real-valued GPs are found to converge better than vector-valued GPs in MOBO; this is attributed to random fluctuations in the posterior variance of the UCB caused by the noisy observations.
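
As a rough illustration of the two scalarization schemes referred to above, the following Python sketch draws a weight vector from a Dirichlet distribution and applies linear and Tchebychev scalarizations to a vector of objective values. The function names, the maximization convention, and the reference point are assumptions made for this example, not details taken from the paper.

```python
# Minimal sketch of random scalarization with Dirichlet-sampled weights.
# Conventions (maximization, reference point at the origin) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def linear_scalarize(f_vals, w):
    """Linear scalarization: weighted sum of the objective values."""
    return np.dot(w, f_vals)

def tchebychev_scalarize(f_vals, w, ref_point):
    """Tchebychev scalarization: smallest weighted gain over a reference point."""
    return np.min(w * (f_vals - ref_point))

# Example: two objectives (to be maximized) evaluated at one candidate point.
f_vals = np.array([0.7, 0.3])            # hypothetical objective values
ref_point = np.zeros(2)                  # hypothetical reference point
w = rng.dirichlet(alpha=np.ones(2))      # random weights on the simplex

print("linear:    ", linear_scalarize(f_vals, w))
print("tchebychev:", tchebychev_scalarize(f_vals, w, ref_point))
```

In a full MOBO loop, a new weight vector would typically be drawn at each round and the scalarized objective optimized with a single-objective acquisition function such as GP-UCB; the Dirichlet concentration parameters could then be updated to bias future draws toward weights that yielded progress, in the spirit of the weight adaptation described above.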