The restricted Boltzmann machine (RBM) is one of the most widely used fundamental models in deep learning. Although many metrics are available for evaluating RBM training algorithms, classification accuracy is the most convincing, as it most directly reflects an algorithm's quality. RBM training algorithms are essentially sampling algorithms based on Gibbs sampling, and improving the classification accuracy they achieve remains a central challenge for algorithmic research. To address this problem, in this paper we propose a fast Gibbs sampling (FGS) algorithm that learns the RBM by adding accelerated weights and an adjustment coefficient. Based on Gibbs sampling theory, an important link was established between the update of the network weights and the mixing rate of the Gibbs sampling chain; the proposed FGS method accelerates this mixing rate through the accelerated weights and adjustment coefficient. To validate the FGS method, numerous experiments on standard datasets were performed to compare it with classical RBM training algorithms. The results showed that the proposed FGS method outperformed the CD, PCD, PT5, PT10, and DGS algorithms, particularly on the handwriting database. These findings suggest potential applications of FGS to real-world problems and demonstrate that the proposed method can build an improved RBM for classification.
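The block Gibbs sampling that underlies the training algorithms mentioned above alternates between sampling the hidden units given the visible units and vice versa. The following is a minimal pure-Python sketch of this chain for a binary RBM; the function names, list-based weight layout, and toy parameters are illustrative assumptions, not code from the paper (the paper's FGS acceleration is not reproduced here).

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v, W, b_h, rng):
    # p(h_j = 1 | v) = sigmoid(b_h[j] + sum_i v[i] * W[i][j])
    probs = [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(len(v))))
             for j in range(len(b_h))]
    return [1 if rng.random() < p else 0 for p in probs]

def sample_visible(h, W, b_v, rng):
    # p(v_i = 1 | h) = sigmoid(b_v[i] + sum_j W[i][j] * h[j])
    probs = [sigmoid(b_v[i] + sum(W[i][j] * h[j] for j in range(len(h))))
             for i in range(len(b_v))]
    return [1 if rng.random() < p else 0 for p in probs]

def gibbs_chain(v0, W, b_v, b_h, k, rng):
    # k steps of block Gibbs sampling: v -> h -> v -> ...
    # CD-k starts this chain at a training example; PCD keeps it persistent.
    v = v0
    for _ in range(k):
        h = sample_hidden(v, W, b_h, rng)
        v = sample_visible(h, W, b_v, rng)
    return v

# Toy example: 3 visible units, 2 hidden units, illustrative weights.
rng = random.Random(0)
W = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]
b_v = [0.0, 0.1, -0.1]
b_h = [0.0, 0.0]
sample = gibbs_chain([1, 0, 1], W, b_v, b_h, k=3, rng=rng)
```

The faster this chain mixes, the closer short-run samples are to the model distribution, which is why the paper targets the mixing rate.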
As one of the essential deep learning models, the restricted Boltzmann machine (RBM) is a commonly used generative model. By adaptively growing the number of hidden units, the infinite RBM (IRBM) is obtained, which has the desirable property of automatically choosing the hidden-layer size for a given task, and its generative capability is competitive with that of the traditional RBM. First, a generative model called the Gaussian IRBM (GIRBM) is proposed to deal with practical scenarios from the perspective of data discretization. Subsequently, a discriminative IRBM (DIRBM) and a discriminative GIRBM (DGIRBM) are established to solve classification problems by attaching extra label units next to the input layer. They are motivated by the fact that a discriminative variant of an RBM can form a self-contained classification framework with better performance than some standard classifiers. Remarkably, the proposed models retain both generative and discriminative properties simultaneously; that is, they can reconstruct data effectively while serving as self-contained classifiers. The experimental results on image classification (both large- and small-scale), text identification, and facial recognition (both clean and noisy) show that the DIRBM and DGIRBM are superior to several state-of-the-art RBM models in classification performance.
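The discriminative construction described above, attaching label units next to the input layer and classifying by comparing clamped configurations, can be sketched as follows. This is a minimal illustration under assumed conventions (one-hot labels concatenated to the input vector, classification by minimum free energy); the helper names and the `free_energy` callback are hypothetical, not the paper's API.

```python
def one_hot(label, num_classes):
    # Encode a class label as a one-hot vector of extra label units.
    return [1 if c == label else 0 for c in range(num_classes)]

def joint_visible(x, label, num_classes):
    # Discriminative RBM: visible layer = [input units | one-hot label units].
    return x + one_hot(label, num_classes)

def classify(x, num_classes, free_energy):
    # Score each candidate label by clamping its one-hot units alongside
    # the input and pick the label with the lowest free energy.
    return min(range(num_classes),
               key=lambda y: free_energy(joint_visible(x, y, num_classes)))
```

Because the same energy function scores both reconstruction and label assignment, the model keeps its generative role while acting as a classifier, which is the property the abstract emphasizes.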