We have developed six convolutional neural network (CNN) models to find an optimal brain tumor detection system for high-grade and low-grade glioma lesions in voluminous magnetic resonance imaging (MRI) brain scans. Glioma is the most common form of brain tumor. The models are built from different combinations and settings of hyperparameters on a conventional CNN architecture. The six models are: two layers with five epochs; five layers with dropout; five layers with stopping criteria (FLSC); FLSC and dropout (FLSCD); FLSC and batch normalization (FLSCBN); and FLSCBN and dropout. The models were trained and tested on the BraTS2013 and Whole Brain Atlas data sets. Among them, the FLSCBN model yielded the best classification results for brain tumor detection. Experimental results revealed that our deep learning approach outperformed conventional state-of-the-art methods.
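The abstract does not give the models' internals, but the two regularizers it names, batch normalization and dropout, can be illustrated with a minimal NumPy sketch of their training-time behavior. All function names and the example batch below are hypothetical, not taken from the paper.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch to zero mean and unit variance per feature,
    then apply a learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def dropout(x, rate=0.5, rng=None):
    """Training-time inverted dropout: zero each activation with
    probability `rate`, scaling survivors so the expected sum is kept."""
    rng = rng or np.random.default_rng(0)  # fixed seed for reproducibility
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

# Toy mini-batch of 3 samples with 2 features each.
batch = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
normed = batch_norm(batch)   # each column now has ~zero mean
dropped = dropout(batch)     # some activations zeroed, rest scaled up
```

In a model such as FLSCBN the normalization step would sit between a convolution and its activation; the sketch only shows the arithmetic each layer applies.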
Existing AI-based tools for COVID-19 prediction focus on a single type of image data: either chest X-rays (CXRs) or computed tomography (CT) scans. There is a need for an AI-based tool that detects COVID-19 from both kinds of chest image given as input. This research gap is the core objective of the proposed work, in which multimodal CNN architectures were developed based on the parameters and hyperparameters of neural networks. Nine experiments evaluate optimizers, learning rates, and the number of epochs; based on the experimental results, suitable parameters are fixed for the multimodal architecture for COVID-19 detection. We constructed a bespoke convolutional neural network (CNN) architecture named multimodal covid network (MMCOVID-NET) by varying the number of layers from two to seven, which can classify images as COVID-19 or normal from both CXRs and CT scans. In all, we constructed 24 models for COVID-19 prediction. Among them, four models, MMCOVID-NET-I, MMCOVID-NET-II, MMCOVID-NET-III, and MMCOVID-NET-IV, performed well, each producing an accuracy of 100%. Because these results were obtained on a small dataset, we repeated the experiments on a larger one and found that MMCOVID-NET-III outperformed all state-of-the-art methods with an accuracy of 99.75%. The experiments carried out in this work conclude that parameters and hyperparameters play a vital role in increasing or decreasing a model's performance.
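The abstract mentions nine experiments over optimizers, learning rates, and epoch counts but does not list the exact values. A generic sketch of how such a search space can be enumerated is below; the specific optimizer names, rates, and epoch budget are illustrative assumptions, not the paper's settings.

```python
from itertools import product

# Hypothetical search space: 3 optimizers x 3 learning rates = 9 runs,
# each trained for a fixed (assumed) epoch budget.
optimizers = ["sgd", "adam", "rmsprop"]
learning_rates = [1e-2, 1e-3, 1e-4]
epoch_budget = 50

experiments = [
    {"optimizer": opt, "lr": lr, "epochs": epoch_budget}
    for opt, lr in product(optimizers, learning_rates)
]
```

Each dictionary would then configure one training run; the best-performing combination is what gets fixed for the final multimodal architecture.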
One of the most essential life skills is the ability to communicate easily. Communication can be described as the transmission of knowledge to produce greater comprehension, and communication and technology are not mutually exclusive. Speech recognition is a technique that converts voice information to text independently of the speaker, which enables applications ranging from digital assistants to machinery control. The aim of this paper is to study robotic vehicles powered by human speech commands. To accomplish this functionality, most of these systems use an Android smartphone that transmits voice commands to a Raspberry Pi. The voice-operated robot receives each command from the speech recognition module, compares it against the command set stored in software, and then executes the matching action over wireless communication. These suggested methods would be useful for devices such as assistive robots for people with disabilities or automation applications such as work robots.
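The compare-against-stored-commands step described above can be sketched as a small lookup over recognized text. This is a hypothetical controller fragment, not the paper's implementation: the command words, motor encoding, and function names are all assumptions.

```python
# Hypothetical command table mapping recognized speech to motor actions,
# as a voice-controlled robot on a Raspberry Pi might store it.
COMMANDS = {
    "forward": (1, 1),     # (left motor, right motor) direction signals
    "backward": (-1, -1),
    "left": (-1, 1),       # spin left: left motor reverse, right forward
    "right": (1, -1),
    "stop": (0, 0),
}

def interpret(transcript):
    """Match a recognized phrase against the stored command set;
    fail safe to 'stop' when no known command word is present."""
    text = transcript.lower()
    for word, action in COMMANDS.items():
        if word in text:
            return action
    return COMMANDS["stop"]
```

In a full system, the returned motor pair would be sent over the wireless link (e.g. Bluetooth or Wi-Fi) to the drive circuitry; defaulting to stop on unrecognized speech keeps the vehicle safe.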