Underwater wireless sensor networks (UWSNs) are emerging as an advanced technology for monitoring and controlling underwater aquatic life. This technology helps discover unexplored resources present in the water through computational intelligence (CI) techniques. CI here refers to the capability of a system to learn a specific task from data or from experimental observation below the water. Today, data is considered the identity of everything that exists in nature, whether that data relates to human beings, machines, or devices such as the Internet of Underwater Things (IoUT). The collected data should be correct, complete, and sufficient for the particular task at hand. Underwater data collection is very difficult because of sensor mobility caused by water drift of up to 3 m/s and by crests and troughs. Considerable packet loss also occurs due to underwater conditions, which hinders the data collection process. Various techniques already exist for efficient collection of data below the water, but they are not properly classified. This manuscript summarizes the concept of data collection in UWSNs along with its classification based on routing. A short discussion of the presence of coronavirus (CORONA) below the water, along with water purification, is also provided. Furthermore, several data routing approaches are analyzed on the basis of quality-of-service parameters, and the current challenges to be tackled during data collection are discussed.

INDEX TERMS: Acoustic sensor network, coronavirus (COVID-19), computational intelligence, routing, underwater sensor network

DIVYA ANAND received the Ph.D. degree in computer science and engineering and the Master of Technology degree in information security from Lovely Professional University. She has expertise in teaching, entrepreneurship, and research and development. She is currently an Assistant Professor with Lovely Professional University. She has published over 20 conference and journal articles. Her research interests include network security, bioinformatics, machine learning, gene identification, big data analytics, and computational models.
Tomato is one of the most essential and widely consumed crops in the world. Tomato yield varies depending on how the plants are fertilized, and leaf disease is the primary factor affecting the quantity and quality of the crop. As a result, it is critical to diagnose and classify these diseases accurately. Different kinds of diseases influence tomato production, and earlier identification of these diseases would reduce their effect on tomato plants and improve crop yield. Various innovative ways of identifying and classifying such diseases have been used extensively. The motive of this work is to support farmers in identifying early-stage diseases accurately and informing them about these diseases. A Convolutional Neural Network (CNN) is used to effectively detect and classify tomato diseases. Google Colab is used to conduct the complete experiment with a dataset containing 3000 images of tomato leaves affected by nine different diseases plus healthy leaves. The complete process is as follows: first, the input images are preprocessed and the targeted regions are segmented from the original images; second, the images are processed by the CNN model under varying hyper-parameters; finally, the CNN extracts characteristics from the images such as color, texture, and edges. The findings demonstrate that the proposed model's predictions are 98.49% accurate.
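To make the described pipeline more concrete, the following is a minimal sketch of a small Keras/TensorFlow CNN classifier for the ten tomato-leaf classes (nine diseases plus healthy). The image size, layer widths, and hyper-parameters are illustrative assumptions, not the exact configuration reported in the paper.

# Minimal sketch of a CNN classifier for ten tomato-leaf classes
# (nine diseases plus healthy). Image size of 128x128 RGB and all
# hyper-parameters below are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int = 10) -> tf.keras.Model:
    model = models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),  # preprocessing: scale pixels to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),   # early filters respond to edges and colour blobs
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # deeper filters capture texture patterns
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                       # one of the hyper-parameters varied during tuning
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()

In practice the segmented leaf images would be loaded (for example with tf.keras.utils.image_dataset_from_directory) and passed to model.fit, with the dropout rate, learning rate, and layer sizes tuned as described above.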
The world is experiencing an unprecedented crisis due to the coronavirus disease (COVID-19) outbreak, which has affected nearly 216 countries and territories across the globe. Since the outbreak of the pandemic, there has been growing interest in computational model-based diagnostic technologies to support the screening and diagnosis of COVID-19 cases using medical imaging such as chest X-ray (CXR) scans. Initial studies have found that patients infected with COVID-19 show abnormalities in their CXR images that correspond to specific radiological patterns; still, detecting these patterns is challenging and time-consuming even for skilled radiologists. In this study, we propose a novel convolutional neural network (CNN) based deep learning fusion framework using the transfer learning concept, in which parameters (weights) from different models are combined into a single model to extract features from images, which are then fed to a custom classifier for prediction. We use gradient-weighted class activation mapping to visualize the infected areas of CXR images. Furthermore, we provide feature representations through visualization to gain a deeper understanding of the class separability of the studied models with respect to COVID-19 detection. Cross-validation studies are used to assess the performance of the proposed models on open-access datasets containing healthy, COVID-19-infected, and other pneumonia-infected CXR images. Evaluation results show that the best-performing fusion model attains a classification accuracy of 95.49% with high sensitivity and specificity.
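As one possible reading of the fusion idea described above, the sketch below concatenates pooled features from two ImageNet-pretrained backbones and feeds them to a small custom classification head for three classes (healthy, COVID-19, other pneumonia). The backbone choices (VGG16 and ResNet50), the head sizes, and the three-class setup are assumptions for illustration; the Grad-CAM visualization step and each backbone's own input preprocessing are omitted.

# Hedged sketch of a transfer-learning fusion model: features extracted by two
# frozen ImageNet-pretrained backbones are concatenated and classified by a
# custom head. Backbones and layer sizes are assumptions, not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

def build_fusion_model(input_shape=(224, 224, 3), num_classes: int = 3) -> tf.keras.Model:
    inputs = layers.Input(shape=input_shape)

    # Two frozen backbones act as fixed feature extractors (transfer learning).
    vgg = applications.VGG16(include_top=False, weights="imagenet", input_shape=input_shape)
    resnet = applications.ResNet50(include_top=False, weights="imagenet", input_shape=input_shape)
    vgg.trainable = False
    resnet.trainable = False

    f1 = layers.GlobalAveragePooling2D()(vgg(inputs))
    f2 = layers.GlobalAveragePooling2D()(resnet(inputs))

    # Fusion: concatenate the two feature vectors and classify with a custom head.
    fused = layers.Concatenate()([f1, f2])
    x = layers.Dense(256, activation="relu")(fused)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fusion_model()

In a real pipeline, each backbone's preprocess_input function would be applied to the CXR images before feature extraction, and cross-validation folds would be built from the open-access datasets before calling model.fit.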