Objective and quantitative assessment methods are needed for fitting hearing aid parameters. This paper proposes a novel speech discrimination assessment method using electroencephalograms (EEGs). The method utilizes event-related potentials (ERPs) to visual stimuli instead of the conventionally used auditory stimuli. A spoken letter is played through a speaker as an initial auditory stimulus. The same letter is then displayed on a screen (match condition), or a different letter is displayed (mismatch condition). The participant judges whether the two stimuli represent the same letter. The P3 component or the late positive potential (LPP) component is elicited when a participant detects a match or a mismatch, respectively, between the auditory and visual stimuli. The hearing ability of each participant can then be estimated objectively by analyzing these ERP components.
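The match/mismatch ERP analysis described above can be illustrated with a toy sketch. This is not the authors' pipeline: the epochs are simulated, and the sampling rate, peak latencies, amplitudes, and component windows (P3 at roughly 300-450 ms, LPP at roughly 450-700 ms) are illustrative assumptions. It shows only the generic ERP workflow of averaging single-trial epochs per condition and measuring mean amplitude in a component window.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                        # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # 1 s epoch after the visual stimulus

def simulate_epoch(condition):
    """Toy single-trial EEG epoch: noise plus one positive deflection.
    Match trials get an earlier P3-like peak (~350 ms), mismatch trials a
    later LPP-like peak (~550 ms). Amplitudes are purely illustrative."""
    peak = 0.35 if condition == "match" else 0.55
    signal = 5.0 * np.exp(-((t - peak) ** 2) / (2 * 0.05 ** 2))
    return signal + rng.normal(scale=2.0, size=t.size)

# Average many trials per condition, as in a standard ERP analysis.
erp = {c: np.mean([simulate_epoch(c) for _ in range(100)], axis=0)
       for c in ("match", "mismatch")}

def mean_amp(x, lo, hi):
    """Mean amplitude of an averaged epoch in a latency window (seconds)."""
    return x[(t >= lo) & (t < hi)].mean()

# Assumed component windows: P3 300-450 ms, LPP 450-700 ms.
p3_match = mean_amp(erp["match"], 0.30, 0.45)
lpp_mismatch = mean_amp(erp["mismatch"], 0.45, 0.70)
```

With real recordings, per-condition amplitudes like `p3_match` and `lpp_mismatch` would be the features used to characterize each participant's discrimination ability.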
In this article, we propose a transfer learning method using the multi-prediction deep Boltzmann machine (MPDBM). In recent years, deep learning has been widely used in applications such as image classification and object detection. However, it is difficult to apply deep learning to medical images because training a deep neural network requires a large amount of data, and medical image datasets, such as X-ray CT datasets, often lack sufficient training data because of privacy concerns. We therefore propose a method that reuses a network trained on non-medical images (the source domain) to improve performance even with a small number of medical images (the target domain). Our method first trains a deep neural network on the source task using the MPDBM. Second, we evaluate the relation between the source and target domains: we feed target-domain samples into the network trained on the source domain and compute histograms of the output-layer responses. Based on these histograms, we select the output-layer variables that correspond to the target domain, and then tune the parameters so that the selected variables serve as the outputs for the target domain. In our experiments, we use the MNIST dataset as the source domain and a lung X-ray CT image dataset as the target domain. Experimental results show that the proposed method improves classification performance.
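The output-unit selection step above can be sketched in a few lines. This is a minimal illustration, not the MPDBM itself: a fixed random linear-softmax map stands in for the source-trained network, and the input dimensions, class counts, and synthetic target samples are all assumptions. The point it shows is the selection rule: average the source network's output-layer response over target-domain samples of each class, and pick the source output unit with the strongest response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a network trained on the source domain (e.g. MNIST digits):
# a fixed random linear + softmax map from 64-dim inputs to 10 source classes.
W = rng.normal(size=(64, 10))

def source_outputs(x):
    """Output-layer response of the (pretend) source-trained network."""
    z = x @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic target-domain samples (standing in for lung CT patches),
# two target classes with slightly different input statistics.
targets = {c: rng.normal(loc=c, size=(50, 64)) for c in (0, 1)}

# For each target class, histogram the mean response per output unit and
# select the source output unit that responds most strongly.
selected = {}
for c, x in targets.items():
    hist = source_outputs(x).mean(axis=0)  # mean response of each output unit
    selected[c] = int(np.argmax(hist))
```

After this selection, the article's final step would fine-tune the network's parameters so that the chosen units act as the target-domain outputs.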