This article proposes a deep neural network (DNN) model to predict the electric field induced by a transcranial magnetic stimulation (TMS) coil under high-amplitude, low-frequency current pulse conditions. The DNN comprises an input layer with 6 neurons, three non-linear hidden layers with a total of 1088 neurons, and a single linear output layer. The model is developed in the Google Colaboratory environment with the TensorFlow framework, using six features as inputs: the number of turns in a single coil wing, coil thickness, coil diameter, the distance between the two wings, the distance between the head and the coil, and the angle between the two wings; the induced electric field is the output. Model performance is evaluated with four verification metrics, namely the coefficient of determination (R²), mean squared error (MSE), mean absolute error (MAE), and root mean squared error (RMSE), computed between simulated and predicted values. The proposed model provides adequate performance, with R² = 0.766, MSE = 0.184, MAE = 0.262, and RMSE = 0.429 in the testing stage. The model can therefore predict the electric field of an assembled TMS coil without the aid of electromagnetic simulation software, which incurs an extensive computational cost.
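For concreteness, a minimal TensorFlow/Keras sketch of such an architecture is shown below. The abstract does not specify how the 1088 hidden neurons are split across the three layers, nor the activation functions or optimizer, so the 512/512/64 split, ReLU activations, and Adam optimizer here are assumptions for illustration only.

```python
import tensorflow as tf

# Sketch of the described DNN: 6 input features -> three non-linear
# hidden layers (1088 neurons in total; the per-layer split is assumed)
# -> one linear output neuron for the electric field.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),                      # six coil features
    tf.keras.layers.Dense(512, activation="relu"),   # hidden layer 1 (assumed size)
    tf.keras.layers.Dense(512, activation="relu"),   # hidden layer 2 (assumed size)
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 3 (assumed size)
    tf.keras.layers.Dense(1, activation="linear"),   # electric field output
])

model.compile(
    optimizer="adam",   # optimizer not stated in the abstract; Adam assumed
    loss="mse",         # matches the MSE verification metric
    metrics=[tf.keras.metrics.MeanAbsoluteError(),
             tf.keras.metrics.RootMeanSquaredError()],
)
```

R² is not a built-in Keras metric; in practice it could be computed on the test set with, for example, sklearn.metrics.r2_score on the simulated versus predicted values.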
Current deep learning (DL)-assisted brain tumor classification models suffer from inductive bias and parameter dependency problems when extracting texture-based image information. To address these problems, vision transformer models have recently begun to replace DL models for classification tasks. However, the high performance of a vision transformer depends on a large-scale dataset, and its self-attention calculations over image patches incur quadratic computational complexity. A vision transformer therefore needs to be data-efficient, so that it can be trained well on limited data, and its computational complexity should be linear in the number of image patches. Consequently, this paper presents a novel linear-complexity, data-efficient image transformer called LCDEiT, which trains on small datasets via a teacher-student strategy and achieves computational complexity linear in the number of patches through an external attention mechanism. The teacher model comprises a custom gated-pooled convolutional neural network that provides knowledge to the transformer-based student model for classifying MRI brain tumors. The average classification accuracy and F1-score are 98.11% and 97.86% on the Figshare benchmark dataset and 93.69% and 93.68% on BraTS-21, respectively. The results indicate that the proposed model could have a great impact on medical imaging-based diagnosis, where data availability and fast computation are the main concerns.
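To illustrate why external attention is linear in the number of patches, the sketch below follows the generic external-attention formulation (attention computed against two small learnable memories rather than between patches); it is not LCDEiT's exact implementation, and the memory size of 64 is an assumption.

```python
import tensorflow as tf

class ExternalAttention(tf.keras.layers.Layer):
    """Generic external attention: each of the N patch tokens attends to a
    small learnable memory of M units, so the cost is O(N*M) -- linear in N
    -- instead of the O(N^2) of patch-to-patch self-attention."""

    def __init__(self, dim, memory_units=64):
        super().__init__()
        self.mk = tf.keras.layers.Dense(memory_units, use_bias=False)  # memory keys M_k
        self.mv = tf.keras.layers.Dense(dim, use_bias=False)           # memory values M_v

    def call(self, x):                         # x: (batch, N, dim)
        attn = self.mk(x)                      # (batch, N, memory_units)
        attn = tf.nn.softmax(attn, axis=1)     # normalize over the patch axis
        # double normalization, as in the external-attention literature
        attn = attn / (tf.reduce_sum(attn, axis=-1, keepdims=True) + 1e-9)
        return self.mv(attn)                   # (batch, N, dim)

# Usage: 196 patch tokens of width 256, attended in linear time.
out = ExternalAttention(dim=256)(tf.random.normal((2, 196, 256)))
```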
Current advances in retinal disease detection have mainly focused on distinct feature extraction using either a convolutional neural network (CNN) or a transformer-based end-to-end deep learning (DL) model. Individual end-to-end DL models can process only texture-based or shape-based information for detection tasks. However, extracting only texture- or shape-based features does not give a model the robustness needed to classify different types of retinal diseases. Therefore, to combine these two kinds of features, this paper develops a fusion model called 'Conv-ViT' to detect retinal diseases from foveal-cut optical coherence tomography (OCT) images. Transfer learning-based CNN models, Inception-V3 and ResNet-50, process texture information by capturing correlations among nearby pixels, while a fused vision transformer processes shape-based features by capturing correlations between distant pixels. Hybridizing these three models yields shape-aware texture feature learning for classifying retinal diseases into four classes: choroidal neovascularization (CNV), diabetic macular edema (DME), DRUSEN, and NORMAL. The weighted average classification accuracy, precision, recall, and F1-score of the model are all approximately 94%. The results indicate that fusing texture and shape features helps the proposed Conv-ViT model outperform state-of-the-art retinal disease classification models.
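A hedged sketch of the fusion idea follows: pooled texture features from Inception-V3 and ResNet-50 are concatenated with shape features from a transformer branch, then classified into the four OCT classes. The transformer branch here is a stand-in (in practice a pretrained ViT encoder would be used), and the 224x224 input size and fusion head are assumptions, since the abstract does not give these details.

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 3))   # input size assumed

# Texture branches: transfer-learned CNN backbones with global pooling.
inception = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
resnet = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")
tex1 = inception(inputs)                       # nearby-pixel correlations
tex2 = resnet(inputs)

# Stand-in shape branch: 16x16 patch embedding + one self-attention block;
# a real implementation would use a full pretrained ViT encoder.
patches = tf.keras.layers.Conv2D(256, 16, strides=16)(inputs)
patches = tf.keras.layers.Reshape((-1, 256))(patches)        # (batch, 196, 256)
shape_feat = tf.keras.layers.MultiHeadAttention(
    num_heads=4, key_dim=64)(patches, patches)               # long-distance correlations
shape_feat = tf.keras.layers.GlobalAveragePooling1D()(shape_feat)

# Fusion head: concatenate texture and shape features, then classify.
fused = tf.keras.layers.Concatenate()([tex1, tex2, shape_feat])
outputs = tf.keras.layers.Dense(4, activation="softmax")(fused)  # CNV, DME, DRUSEN, NORMAL

conv_vit = tf.keras.Model(inputs, outputs)
```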