Brain tumors are among the most common and aggressive malignancies and can drastically shorten a patient's lifespan, so treatment planning is a crucial step in improving quality of life. Several imaging techniques, such as CT, MRI, and ultrasound, are commonly used to assess tumors of the prostate, breast, lung, brain, and other organs; in this work, MRI images are used to detect brain tumors. The enormous amount of data produced by MRI scans makes timely manual tumor vs. non-tumor classification impractical, and manual assessment from a small number of images has inherent limitations (e.g., imprecise quantitative measurements). An automated classification system is therefore needed to support timely diagnosis. Automatic categorization of a brain tumor within its surrounding region is challenging because of spatial and structural variability. This comparative study uses four deep learning models, AlexNet, VGG16, GoogLeNet, and ResNet50, to classify brain tumors. The results show that ResNet50 is the most accurate model (95.8%), while AlexNet is the fastest, with a processing time of 1.2 seconds. In addition, a parallel hardware processing unit (GPU) is employed for real-time use, on which AlexNet (the fastest model) requires only 8.3 ms per image.
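As a rough illustration of how such a four-model comparison could be wired up (a minimal sketch, not the authors' actual pipeline: the dummy input, the two-class head, and the timing loop are assumptions), torchvision provides all four architectures:

```python
# Hedged sketch: instantiate the four backbones with a tumor/non-tumor head
# and time single-image inference. Model classes are real torchvision APIs;
# the random input stands in for a preprocessed MRI slice.
import time
import torch
import torch.nn as nn
from torchvision import models

def build(name, num_classes=2):
    """Create a backbone and replace its final layer for 2-class output."""
    if name == "alexnet":
        m = models.alexnet(weights=None)
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif name == "vgg16":
        m = models.vgg16(weights=None)
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif name == "googlenet":
        m = models.googlenet(weights=None, aux_logits=False, init_weights=True)
        m.fc = nn.Linear(1024, num_classes)
    else:  # resnet50
        m = models.resnet50(weights=None)
        m.fc = nn.Linear(2048, num_classes)
    return m.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1, 3, 224, 224).to(device)  # stand-in for one MRI slice

for name in ["alexnet", "vgg16", "googlenet", "resnet50"]:
    m = build(name).to(device)
    with torch.no_grad():
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        _ = m(x)
        if device == "cuda":
            torch.cuda.synchronize()
    print(f"{name}: {(time.perf_counter() - t0) * 1e3:.1f} ms per image")
```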
Fingerprints are the most widely used form of human identification and verification because of their uniqueness and permanence. For that reason, many Automatic Fingerprint Identification Systems (AFIS) have been commercially produced and accepted by the international community. Although their performance is good, there is still room for improvement. One of the main concerns is poor-quality fingerprint images caused by the capture devices. To improve the efficiency of AFIS, both image enhancement and feature extraction methods must therefore be applied. Effective feature extraction depends on image quality: a high-quality image normally yields genuine features, whereas a poor-quality image produces spurious features that lead to false acceptance. This paper reviews several state-of-the-art fingerprint image pre-processing methods, including gray-level normalization, noise removal, and segmentation.
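As a concrete illustration of the gray-level normalization step, the sketch below implements the classic mean/variance normalization commonly used on fingerprint images; the target mean m0 and variance v0 are illustrative values, not ones taken from this review:

```python
# Hedged sketch of gray-level (mean/variance) normalization for a
# grayscale fingerprint image stored as a NumPy array.
import numpy as np

def normalize(img: np.ndarray, m0: float = 100.0, v0: float = 100.0) -> np.ndarray:
    """Map a grayscale image to a prescribed mean m0 and variance v0."""
    img = img.astype(np.float64)
    m, v = img.mean(), img.var()
    if v == 0:                                  # flat image: nothing to stretch
        return np.full(img.shape, int(m0), dtype=np.uint8)
    dev = np.sqrt(v0 * (img - m) ** 2 / v)      # per-pixel deviation from m0
    out = np.where(img > m, m0 + dev, m0 - dev)
    return np.clip(out, 0, 255).astype(np.uint8)

# Usage: normalized = normalize(raw_fingerprint_array)
```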
Internet of Things (IoT) technology allows many devices to connect with each other. The interaction can be between humans and devices or between the devices themselves. Data travel between devices over the transmission media within the network boundary, and they may travel outside that boundary over the internet when they need to be analyzed or stored in the cloud. Because of the transmission media and the internet, the data are vulnerable to attack and must be strongly encrypted for protection, yet most encryption techniques consume considerable computing resources. In this work, we divide the data used in the IoT environment into three sensitivity levels, low, medium, and high, to conserve resources such as encryption and decryption time and battery usage. We propose a framework that encrypts the data according to its sensitivity level, which is determined using the K-nearest neighbors (K-NN) machine learning algorithm.
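The following minimal sketch shows the idea behind the framework; the feature vectors, labels, and AES key-size mapping are hypothetical illustrations (the abstract does not specify them), and only the use of K-NN to assign a sensitivity level comes from the paper:

```python
# Hedged sketch: classify each IoT record's sensitivity with K-NN, then
# choose a cipher strength for it. All data below are toy examples.
from sklearn.neighbors import KNeighborsClassifier

# Toy features per record, e.g. [payload size, source type, field count]
X_train = [[10, 0, 2], [12, 0, 3], [40, 1, 5], [45, 1, 6], [90, 2, 9], [95, 2, 8]]
y_train = [0, 0, 1, 1, 2, 2]        # 0 = low, 1 = medium, 2 = high sensitivity

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# Hypothetical mapping: stronger (costlier) encryption for more sensitive data
key_bits = {0: 128, 1: 192, 2: 256}
for record in [[11, 0, 2], [92, 2, 9]]:
    level = knn.predict([record])[0]
    print(f"record {record}: sensitivity {level} -> AES-{key_bits[level]}")
```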
Exact string matching is one of the fundamental problems in computer science. This research proposes a hybrid exact string matching algorithm called E-Atheer, which combines the searching technique of the Atheer algorithm with the shifting technique of the Berry-Ravindran algorithm. The proposed algorithm outperforms the original, recent, and standard algorithms in both the number of attempts and the number of character comparisons. E-Atheer was evaluated on several types of databases: DNA, Protein, XML, Pitch, English, and Source. In the number of attempts, the best performance is obtained on the Pitch dataset and the worst on the DNA dataset; in the number of character comparisons, the best and worst databases for E-Atheer are Source and DNA, respectively.
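The abstract gives no pseudocode for E-Atheer or Atheer, but the Berry-Ravindran shifting technique it borrows is well documented. The sketch below pairs that shift rule with a plain window comparison; the comparison loop is a stand-in, not the Atheer searching technique itself:

```python
# Hedged sketch of the Berry-Ravindran bad-character shift: after each
# attempt, the window advances based on the two text characters just to
# the right of the window.
def berry_ravindran(pat: str, text: str):
    """Yield all start indices of pat in text."""
    m, n = len(pat), len(text)
    if m == 0 or m > n:
        return
    # Rightmost occurrence of each adjacent pair wins (smallest shift).
    pair = {}
    for i in range(m - 1):
        pair[(pat[i], pat[i + 1])] = m - i
    j = 0
    while j <= n - m:
        if text[j:j + m] == pat:
            yield j
        if j + m >= n:                       # window flush with text end
            break
        a = text[j + m]
        b = text[j + m + 1] if j + m + 1 < n else None
        if a == pat[m - 1]:
            j += 1                           # align pat's last char with a
        elif (a, b) in pair:
            j += pair[(a, b)]                # align the pair inside pat
        elif b == pat[0]:
            j += m + 1                       # align pat's first char with b
        else:
            j += m + 2                       # pair cannot occur: skip past it

print(list(berry_ravindran("GATA", "CAGATAAGATA")))  # [2, 7]
```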