In the present work, we demonstrated an upcycling approach that couples effective wastewater treatment with photocatalytic hydrogen production using nanocomposites of manganese oxide-decorated activated carbon (MnO2-AC). The nanocomposites were synthesized sonochemically in pure water from MnO2 nanoparticles and AC nanoflakes, both prepared through green routes using extracts of Brassica oleracea and Azadirachta indica, respectively. The MnO2-AC nanocomposites were confirmed to exist as nanopebbles with a high specific surface area of ~109 m²/g. When used as a photocatalyst for wastewater treatment, they exhibited highly efficient hydrogen production: a rate of 395 mL/h was achieved when splitting a synthetic sulphide effluent (S²⁻ = 0.2 M) photocatalytically over MnO2-AC. These results demonstrate the excellent energy-conversion capability of the MnO2-AC nanocomposites, particularly for photocatalytic hydrogen evolution from sulphide wastewater.
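To put the reported volumetric rate in molar terms, the 395 mL/h figure can be converted with the ideal gas law. This is a rough sketch: the abstract does not state the measurement temperature or pressure, so ambient conditions (25 °C, 1 atm) are assumed here.

```python
# Convert the reported H2 evolution rate (395 mL/h) to mmol/h,
# assuming ideal-gas behavior at 25 degC and 1 atm (assumed conditions;
# the abstract does not specify them).
R = 0.082057  # L*atm/(mol*K), ideal gas constant
T = 298.15    # K (assumed)
P = 1.0       # atm (assumed)
V = 0.395     # L/h, reported rate

n = P * V / (R * T)          # mol/h
print(round(n * 1000, 2))    # ~16 mmol H2 per hour under these assumptions
```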
Video surveillance is a promising solution for monitoring people living independently in their homes. Recently, several contributions to video surveillance have been proposed. However, robust video surveillance remains challenging because of illumination changes, rapid variations in target appearance, similar nontarget objects in the background, and occlusions. In this paper, a novel object-detection approach for video surveillance is presented. The proposed algorithm consists of several steps: video compression, object detection, and object localization. In video compression, the input video frames are compressed using the two-dimensional discrete cosine transform (2D DCT) to reduce storage requirements. In object detection, key feature points are detected by computing statistical correlation, and the matched feature points are classified into foreground and background using Bayes' rule. Finally, the foreground feature points are localized in successive video frames by embedding the maximum-likelihood feature points over the input frames. Several frame-based surveillance metrics are employed to evaluate the proposed approach. Experimental results and a comparative study demonstrate its effectiveness.
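The 2D DCT compression step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame content, frame size, and the 16×16 retained-coefficient block are all illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Minimal sketch of 2D-DCT frame compression: transform a frame,
# keep only a top-left block of low-frequency coefficients, and
# reconstruct an approximation. Parameters here are assumptions.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))               # stand-in grayscale frame

coeffs = dctn(frame, norm="ortho")         # forward 2D DCT
kept = np.zeros_like(coeffs)
kept[:16, :16] = coeffs[:16, :16]          # retain low frequencies only
recon = idctn(kept, norm="ortho")          # approximate reconstruction

ratio = frame.size / (16 * 16)             # nominal compression ratio
print(ratio)  # 16.0
```

Keeping a fixed low-frequency block is the simplest truncation scheme; practical codecs instead quantize coefficients per 8×8 block, but the storage-saving principle is the same.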
Agriculture has been an important research area in image processing over the last five years. Diseases affect the quality and quantity of fruits, thereby disrupting a country's economy. Many computerized techniques have been introduced for detecting and recognizing fruit diseases. However, some issues remain to be addressed, such as irrelevant features and high-dimensional feature vectors, which increase the computational time of the system. Herein, we propose an integrated deep learning framework for classifying fruit diseases. We consider seven types of fruit: apple, cherry, blueberry, grape, peach, citrus, and strawberry. The proposed method comprises several important steps. Initially, data augmentation is applied, and then two different types of features are extracted. The first type comprises classical texture and color features. The second type comprises deep features extracted using a pretrained model, reused through transfer learning. Subsequently, both types of features are merged using the maximum mean value of the serial approach. Next, the resulting fused vector is optimized using a harmonic threshold-based genetic algorithm. Finally, the selected features are classified using multiple classifiers. An evaluation on the PlantVillage dataset achieves an accuracy of 99%. A comparison with recent techniques indicates the superiority of the proposed method.
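The serial fusion and feature-selection steps can be sketched in outline. This is a hedged stand-in: the feature dimensions are invented for illustration, and the simple mean-based threshold below is only a placeholder for the paper's harmonic threshold-based genetic algorithm, which is considerably more elaborate.

```python
import numpy as np

# Sketch of serial (concatenation-based) feature fusion followed by a
# simple threshold-based selection. Dimensions and the selection rule
# are illustrative assumptions, not the paper's actual parameters.
rng = np.random.default_rng(1)
classical = rng.random((10, 64))    # stand-in texture + color features
deep = rng.random((10, 128))        # stand-in pretrained-CNN features

fused = np.concatenate([classical, deep], axis=1)  # serial fusion -> (10, 192)

# Keep features whose mean activation exceeds the global mean
# (placeholder for the harmonic-threshold GA selection step).
col_means = fused.mean(axis=0)
selected = fused[:, col_means > col_means.mean()]
print(fused.shape, selected.shape)
```

The selected matrix would then be passed to the downstream classifiers; a GA would instead search over binary feature masks scored by classification fitness.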
Video sequences provide more information than still images about how objects and scenes change over time. However, video requires more storage space and wider transmission bandwidth. Hence, retrieval and event detection in large data sets pose greater challenges during visual tracking. In the proposed method, the object planes are segmented properly, and motion parameters are derived for each plane to achieve a better compression ratio. Most existing tracking algorithms in dynamic scenes consider the target alone and often ignore background information; therefore, they fail to track the target. To improve on existing systems, a robust visual tracking algorithm is to be developed that adapts to drastic changes in target appearance without background influence. The initial occlusion of nontarget objects in the background can be addressed effectively by integrating multiple cues and spatial information into the target representation. By combining motion information with detection methods, the target can be reacquired when complete occlusion of the target occurs.
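The motion cue used for reacquisition after occlusion can be illustrated with simple frame differencing. This is a minimal sketch under assumed synthetic frames; a real tracker, as the text notes, would fuse this motion cue with appearance and spatial cues rather than rely on it alone.

```python
import numpy as np

# Minimal sketch: frame differencing as a motion cue for reacquiring a
# target after full occlusion. Frames and threshold are assumptions.
prev = np.zeros((32, 32))           # previous frame (target occluded)
curr = np.zeros((32, 32))
curr[10:14, 20:24] = 1.0            # synthetic target reappearing

diff = np.abs(curr - prev)          # temporal difference
mask = diff > 0.5                   # motion mask
ys, xs = np.nonzero(mask)
center = (int(ys.mean()), int(xs.mean()))  # candidate target location
print(center)
```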
The resultant experimental values highlighted the superiority of the IDLDD-BTI model over other state-of-the-art methods.