The cloud computing paradigm uses virtualization to isolate data, workloads and network bandwidth elastically. Providers have historically used virtual machines (VMs) for this workload isolation; however, VMs require substantial resources and are costly to set up and deploy. Containerization has more recently emerged as an alternative virtualization approach, typically delivered in a Software-as-a-Service model. Docker has been one of the most popular container platforms: Docker containers share the host operating system and its resources, whereas virtual machines rely on hypervisor technology to abstract the hardware and operating system. The study compares Docker containers against virtual machines across several cloud vendors on response time, download time, CPU processing time and memory usage. The performance comparison covers the AWS, Google Cloud and Microsoft Azure platforms. The dataset, downloaded from the Kaggle website, was a large loan-defaulting dataset classified with a deep learning model implemented in the Keras and TensorFlow Python frameworks. The comparisons show that Docker containers are faster than KVM and Xen virtual machines. The study also shows that scaling the containers with the Kubernetes framework improves Docker container performance relative to standalone Docker on bare metal as well as on cloud frameworks.
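The abstract does not describe the measurement harness, but a comparison of CPU processing time and memory usage across platforms needs a workload timed the same way everywhere. A minimal sketch of such a harness, using only the Python standard library (the workload function here is a hypothetical stand-in, not the study's training job):

```python
import time
import tracemalloc

def benchmark(workload, *args):
    """Run one workload and return (elapsed_seconds, peak_memory_bytes)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    workload(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak

def matmul_workload(n=80):
    # CPU-bound stand-in for the job run inside each VM or container
    m = [[i * j % 7 for j in range(n)] for i in range(n)]
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*m)]
            for row in m]

elapsed, peak = benchmark(matmul_workload, 80)
```

Running the same `benchmark` call inside a VM, a Docker container, and on bare metal is what makes the resulting numbers comparable.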
Facial expression recognition in computer vision and texture synthesis takes two forms: static image analysis and dynamic video textures. The former involves 2D image texture synthesis; the latter extends video sequences into the temporal domain, taking motion into account. Spatial-domain synthesis produces image textures comparable to the actual texture, while dynamic texture synthesis extends video textures in the spatial and temporal domains. Facial actions cause local appearance changes over time, so dynamic texture descriptors should inherently be more suitable for facial action detection than their static variants. A video sequence is treated as a spatio-temporal volume of texture from which dynamic features are extracted. The paper uses LBP-TOP, a Local Binary Pattern variant, to extract facial expression features from video sequence datasets; Gabor filters are also applied in the feature extraction stage. Volume Local Binary Patterns (VLBP) are then used to combine texture, motion and appearance. A tracker was used to locate the facial image as a point in the deformation space. VLBP and LBP-TOP clearly outperformed earlier approaches owing to their local processing, robustness to monotonic gray-scale changes and simple computation. The study used the Facial Expressions and Emotions Database (FEED) and CK+ databases. LBP-TOP and LGBP-TOP achieved better recognition rates than the static-image local binary pattern on a set of 333 sequences from the Cohn-Kanade database.
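LBP-TOP extends the basic 8-neighbour Local Binary Pattern by computing it on three orthogonal planes (XY, XT, YT) of the video volume and concatenating the histograms. A pure-Python sketch of that underlying single-plane LBP operator (illustrative only, not the paper's implementation):

```python
def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch (list of 3 rows of 3)."""
    c = patch[1][1]
    # neighbours in clockwise order starting at the top-left corner
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    # threshold each neighbour at the centre value and pack the bits
    return sum(1 << i for i, v in enumerate(nbrs) if v >= c)

def lbp_histogram(img):
    """Histogram of LBP codes over the image interior: the texture descriptor."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            patch = [row[c - 1:c + 2] for row in img[r - 1:r + 2]]
            hist[lbp_code(patch)] += 1
    return hist
```

The thresholding step is what gives the descriptor its robustness to monotonic gray-scale changes noted above: any intensity shift that preserves the ordering of neighbour and centre values leaves every code unchanged.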