Cloud computing is a virtualized, scalable, ubiquitous, and distributed computing paradigm that provides resources and services dynamically in a subscription-based environment. Services are delivered through Cloud Service Providers (CSPs), and cloud computing is widely used to support large numbers of business and scientific applications. Large-scale scientific applications are executed in the cloud in the form of scientific workflows. Scientific workflows are data-intensive applications, and a single scientific workflow may comprise thousands of tasks. Deadline constraints, task failures, budget constraints, and improper organization and management of tasks can hinder the execution of scientific workflows. Therefore, we propose a fault-tolerant and data-oriented scientific workflow management and scheduling system (FD-SWMS) for cloud computing. The proposed strategy applies a multi-criteria approach to schedule and manage the tasks of scientific workflows, considering their special characteristics: tasks may execute simultaneously in parallel, run in a pipeline, be aggregated to form a single task, or be distributed to create multiple tasks. The strategy schedules tasks based on their data-intensiveness, provides fault tolerance through a cluster-based approach, and improves energy efficiency through a load-sharing mechanism. To evaluate the effectiveness of the proposed strategy, simulations were carried out in WorkflowSim on the Montage and CyberShake workflows. The proposed FD-SWMS strategy performs better than existing state-of-the-art strategies.
The proposed strategy on average reduced execution time by 25%, 17%, 22%, and 16%, minimized execution cost by 24%, 17%, 21%, and 16%, and decreased energy consumption by 21%, 17%, 20%, and 16%, compared with the existing QFWMS, EDS-DC, CFD, and BDCWS strategies, respectively, for the Montage scientific workflow. Similarly, the proposed strategy on average reduced execution time by 48%, 17%, 25%, and 42%, minimized execution cost by 45%, 11%, 16%, and 38%, and decreased energy consumption by 27%, 25%, 32%, and 20%, compared with the existing QFWMS, EDS-DC, CFD, and BDCWS strategies, respectively, for the CyberShake scientific workflow.
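The abstract describes scheduling by data-intensiveness combined with load sharing across resources. A minimal sketch of that idea, assuming illustrative task/VM structures (the field names and the greedy least-loaded policy are assumptions for illustration, not the paper's actual algorithm):

```python
# Hypothetical sketch: place the most data-intensive tasks first, each on
# the currently least-loaded VM (load sharing). Fields are illustrative.

def schedule(tasks, vms):
    """Assign each task (dict with 'id' and 'data_mb') to a VM, balancing
    accumulated data load across VMs."""
    load = {vm: 0.0 for vm in vms}
    placement = {}
    # Consider large data transfers first so they are spread out early.
    for task in sorted(tasks, key=lambda t: t["data_mb"], reverse=True):
        vm = min(load, key=load.get)  # least-loaded VM
        placement[task["id"]] = vm
        load[vm] += task["data_mb"]
    return placement, load
```

With two VMs and tasks of 500, 300, and 200 MB, the greedy pass balances both VMs to 500 MB each.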
Rapid development in sketch-to-image translation methods has boosted investigation procedures in law enforcement agencies. However, the large modality gap between manually generated sketches and real face images makes this task challenging. Generative adversarial networks (GANs) and encoder-decoder approaches are usually employed to accomplish sketch-to-image generation, with promising results. This paper targets sketch-to-image translation under heterogeneous face angles and lighting effects using a multi-level conditional generative adversarial network (cGAN). The proposed multi-level cGAN works in four phases. Three independent cGAN networks are incorporated, one per stage, followed by a CNN classifier. The Adam stochastic gradient descent mechanism was used for training, with a learning rate of 0.0002 and momentum estimates β1 and β2 of 0.5 and 0.999, respectively. The multi-level 3D convolutional architecture helps to preserve spatial facial attributes and pixel-level details. The 3D convolution and deconvolution guide G1, G2, and G3 to use additional features and attributes for encoding and decoding, which helps to preserve the direction and posture of the targeted image attributes and the spatial relationships among the whole image's features. The proposed framework processes the 3D convolution and deconvolution using vectorization; this takes the same time as 2D convolution but extracts more features and facial attributes. We used pre-trained ResNet-50, ResNet-101, and MobileNet to classify the high-resolution images generated from sketches. We have also developed a new Pakistani Politicians Face-sketch Dataset (PPFD) for experimental purposes. Results reveal that the proposed cGAN framework outperforms existing approaches with respect to accuracy, structural similarity index measure (SSIM), signal-to-noise ratio (SNR), and peak signal-to-noise ratio (PSNR).
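The training setup above uses Adam with lr = 0.0002, β1 = 0.5, and β2 = 0.999, the common GAN configuration. A minimal sketch of a single scalar Adam update with those hyperparameters (the parameter and gradient values are illustrative; a real framework's optimizer would handle tensors and autograd):

```python
# One scalar Adam step with the reported GAN hyperparameters.

def adam_step(theta, grad, m, v, t, lr=0.0002, b1=0.5, b2=0.999, eps=1e-8):
    """Return updated (theta, m, v) after one Adam step at timestep t >= 1."""
    m = b1 * m + (1 - b1) * grad          # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

The low β1 = 0.5 (versus the default 0.9) shortens the momentum horizon, which is widely used to stabilize adversarial training.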
After COVID-19 pneumonia was declared a pandemic, researchers promptly advanced to seek solutions for patients fighting this fatal disease. Computed tomography (CT) scans offer valuable insight into how COVID-19 infection affects the lungs. Analysis of CT scans is very significant, especially when physicians are striving for quick solutions. This study successfully segmented lung infection due to COVID-19 and provided physicians with a quantitative analysis of the condition. COVID-19 lesions often occur near and over parenchyma walls, which are denser and exhibit lower contrast than the tissues outside the parenchyma. We applied adaptive Wallis and Gaussian filters alternately to regulate the outlining of the lungs and the lesions near the parenchyma. We propose a context-aware conditional generative adversarial network (CGAN) with gradient penalty and spectral normalization for automatic segmentation of lungs and lesions. The proposed CGAN exploits higher-order statistics compared to traditional deep-learning models and produced promising results for lung segmentation. The CGAN showed outstanding results for COVID-19 lesion segmentation, with an accuracy of 99.91%, DSC of 92.91%, and AJC of 92.91%; for lung segmentation, we achieved an accuracy of 99.87%, DSC of 96.77%, and AJC of 95.59%. Additionally, the suggested network attained a sensitivity of 100%, 81.02%, 76.45%, and 99.01% for the critical, severe, moderate, and mild infection severity levels, respectively. The proposed model outperformed state-of-the-art techniques for COVID-19 segmentation and detection.
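The CGAN above adds a gradient penalty, which constrains the norm of the critic's input gradient toward 1. As an illustrative sketch only (not the paper's implementation), consider a toy linear critic f(x) = w · x, whose input gradient is simply w, so the penalty term λ(‖∇f‖ − 1)² has a closed form; real models would compute the gradient with autograd:

```python
# Toy gradient-penalty term for a linear critic f(x) = w . x,
# whose input gradient is exactly w. lam is the penalty weight.
import math

def gradient_penalty(w, lam=10.0):
    """Penalty pushing the critic's input-gradient norm toward 1."""
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    return lam * (grad_norm - 1.0) ** 2
```

For w = (3, 4) the gradient norm is 5, so the penalty is 10 × (5 − 1)² = 160.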
Scalability is one of the most important quality attributes of software-intensive systems, because it maintains effective performance under large, fluctuating, and sometimes unpredictable workloads. To achieve scalability, the thread pool system (TPS, also known as an executor service) has been used extensively as a middleware service in software-intensive systems. TPS optimization, which determines the optimal size of the thread pool dynamically at runtime, is a challenging problem. In the case of a distributed TPS (DTPS), another issue is load balancing between the available set of TPSs running on backend servers. Existing DTPSs become overloaded either due to an inappropriate TPS optimization strategy at the backend servers or an improper load balancing scheme that cannot quickly recover from an overload; consequently, the performance of the software-intensive system suffers. Thus, in this paper we propose a new DTPS that follows collaborative round-robin load balancing, which acts as a double-edged sword. On the one hand, it effectively balances the load (in an overload situation) among available TPSs through a fast overload recovery procedure that decelerates the load on overloaded TPSs down to their capacities and shifts the remaining load toward other gracefully running TPSs. On the other hand, the robust load deceleration technique applied to an overloaded TPS sets an appropriate upper bound on the thread pool size, because the pool size of each TPS is kept equal to the request rate on it, thereby dynamically optimizing the TPS. We evaluated the proposed system against state-of-the-art DTPSs using a client-server-based simulator and found that it outperformed them by sustaining smaller response times.
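A minimal sketch of the collaborative round-robin idea described above: requests cycle over TPS replicas, and load beyond a replica's capacity spills over to the next gracefully running one. The server names and the simple capacity counter are assumptions for illustration, not the paper's actual protocol:

```python
# Hypothetical round-robin dispatch with capacity-aware overflow:
# an overloaded server is skipped and its excess load shifts onward.
from itertools import cycle

def dispatch(requests, capacities):
    """Assign request ids round-robin over servers, skipping any server
    that has reached its capacity."""
    assigned = {s: [] for s in capacities}
    order = cycle(capacities)
    for req in requests:
        for _ in range(len(capacities)):
            s = next(order)
            if len(assigned[s]) < capacities[s]:
                assigned[s].append(req)
                break
    return assigned
```

With capacities {"A": 1, "B": 2} and three requests, server A takes one request and the overflow shifts to B.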
Scalability is one of the most important nonfunctional requirements of server applications, because it maintains effective performance under large, fluctuating, and sometimes unpredictable workloads. To achieve scalability, the thread pool system (TPS) has been used extensively as a middleware service in server applications. The size of the thread pool is the most significant factor affecting the overall performance of servers, and determining the optimal size dynamically at runtime is a challenging problem. The most widely used and simplest method is to keep the size of the thread pool equal to the request rate, i.e., the frequency-oriented thread pool (FOTP). FOTPs are the most widely used TPSs in industry because of their implementation simplicity, negligible overhead, and applicability to any system. However, frequency-based schemes focus on only one aspect of load change: fluctuations in the request rate. The request rate alone is an imperfect knob for scaling a thread pool. Thus, this paper presents a workload-profiling-based FOTP that uses request size (the service time of a request) in addition to the request rate as a knob to scale the thread pool at runtime, because we argue that the combination of both truly represents load fluctuation in server-side applications. We evaluated the proposed system against the state-of-the-art TPS of Oracle Corporation (using a client-server-based simulator) and concluded that our system outperformed it in terms of both response time and throughput.
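The core argument above is that pool size should reflect offered load, i.e., request rate multiplied by service time, rather than request rate alone. A minimal sketch under that assumption (the function and its profiling inputs are illustrative, not the paper's exact formula):

```python
# Illustrative sizing: threads needed = arrival rate x mean service time
# (Little's law), rounded up. A rate-only FOTP would return request_rate.

def pool_size(request_rate, service_times):
    """request_rate in req/s, service_times in seconds (profiled sample)."""
    mean_service = sum(service_times) / len(service_times)
    demand = request_rate * mean_service
    return max(1, int(demand) + (demand > int(demand)))  # ceil, at least 1
```

At 100 req/s with a mean service time of 0.1 s, ten threads keep up, whereas a pure frequency-oriented pool would allocate one hundred.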