To produce quality products, companies need new engineering hires with strong problem-solving, debugging, and analysis skills. Many graduates enter the workforce with exceptional development skills but lack proficiency in testing, debugging, and analysis. This is in part because academic curricula emphasize development at the expense of teaching software testing as a formal engineering discipline. Most curricula today emphasize the initial phases of the development life cycle, namely requirements gathering, architecture design, and implementation. What testing skills students do retain are typically learned ad hoc while working on solutions for an implementation-oriented course. The lack of formal test education among graduates forces industry to spend substantial resources educating them in the art and science of software testing. The contributions of this paper include an evaluation of software testing as an industry profession, a survey of current curriculum guidelines, a survey of software testing education in practice today, and a discussion of ongoing efforts to advance the status of software testing in academic curricula through a novel, crowd-sourced, industry-expert approach to software test education.
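As a point of reference for the kind of skill the abstract argues is under-taught, the sketch below shows a small, deliberately boundary-focused unit test. The function, its name, and the chosen cases are hypothetical illustrations, not material from the paper.

```python
# Hypothetical example: a parametrized unit test for a simple pricing function,
# exercising the boundary and error cases that ad-hoc testing often misses.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 0, 100.0),    # lower boundary: no discount
    (100.0, 100, 0.0),    # upper boundary: full discount
    (19.99, 15, 16.99),   # typical case, checks rounding behavior
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```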
Leveraging redundant resources is a common means of meeting availability requirements, but it often implies redundant costs as well. At the same time, virtualization technologies promise cost reduction through resource consolidation. Virtualization and high-availability (HA) technologies can be combined to optimize availability while minimizing costs, but merging them properly introduces new challenges. This paper examines how virtualization technologies and techniques can augment and amplify traditional HA approaches while avoiding potential pitfalls. Special attention is paid to applying HA configurations (such as active/active and active/passive) to virtualized environments, stretching virtual environments across physical machine boundaries, resource-sharing approaches, field experiences, and hazards to avoid.
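To make the active/passive pattern concrete, the following minimal sketch monitors a service running in an "active" guest and powers on a standby guest when repeated health checks fail. The domain names, service address, thresholds, and the use of libvirt's `virsh start` for promotion are assumptions for illustration; a production setup would also handle virtual-IP takeover, fencing, and split-brain avoidance.

```python
# Minimal sketch of an active/passive failover monitor for a virtualized service.
# Assumed (hypothetical) setup: libvirt/KVM guests "svc-active" and "svc-standby",
# service reachable on TCP port 8080, promotion = powering on the standby guest.
import socket
import subprocess
import time

SERVICE_ADDR = ("10.0.0.10", 8080)   # hypothetical address of the active instance
STANDBY_DOMAIN = "svc-standby"       # hypothetical libvirt domain name of the standby
CHECK_INTERVAL = 2.0                 # seconds between health probes
MAX_FAILURES = 3                     # consecutive failures before failing over

def service_is_up(addr, timeout=1.0):
    """Return True if a TCP connection to the service succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def promote_standby():
    """Power on the standby guest; a real deployment would also move the virtual IP."""
    subprocess.run(["virsh", "start", STANDBY_DOMAIN], check=True)

def monitor():
    failures = 0
    while True:
        if service_is_up(SERVICE_ADDR):
            failures = 0
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                promote_standby()
                return
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()
```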
Virtual machine (VM) live migration is a critical feature for managing virtualized environments, enabling dynamic load balancing, consolidation for power management, preparation for planned maintenance, and other management features. However, not all virtual machine live migration is created equal. Variants include memory migration, which relies on shared backend storage between the source and destination of the migration, and storage migration, which migrates storage state as well as memory state. We have developed an automated testing framework that measures important performance characteristics of live migration, including total migration time, the time a VM is unresponsive during migration, and the amount of data transferred over the network during migration. We apply this testing framework and present the results of studying live migration, both memory migration and storage migration, in various virtualization systems including KVM, XenServer, VMware, and Hyper-V. The results provide important data to guide the migration decisions of both system administrators and autonomic cloud management systems.
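The paper's framework is not published here, but the sketch below illustrates how two of the metrics it reports, total migration time and the period of guest unresponsiveness, could be measured for a libvirt-managed KVM guest. The domain name, guest address, destination URI, and the TCP-probe approximation of unresponsiveness are assumptions for illustration only.

```python
# Minimal sketch (not the paper's framework): time a live migration of a libvirt/KVM
# guest and estimate the longest window in which it stopped answering TCP probes.
# Assumed names: guest domain "testvm", guest address 10.0.0.20:22, destination
# host URI qemu+ssh://dest-host/system.
import socket
import threading
import time

import libvirt  # pip install libvirt-python

GUEST_ADDR = ("10.0.0.20", 22)
SRC_URI = "qemu:///system"
DST_URI = "qemu+ssh://dest-host/system"
DOMAIN = "testvm"

def probe_downtime(stop_event, gaps):
    """Probe the guest every 50 ms; record the longest gap between successful probes."""
    last_ok = time.monotonic()
    longest = 0.0
    while not stop_event.is_set():
        try:
            with socket.create_connection(GUEST_ADDR, timeout=0.05):
                now = time.monotonic()
                longest = max(longest, now - last_ok)
                last_ok = now
        except OSError:
            pass
        time.sleep(0.05)
    gaps.append(longest)

def measure_migration():
    src = libvirt.open(SRC_URI)
    dst = libvirt.open(DST_URI)
    dom = src.lookupByName(DOMAIN)

    stop, gaps = threading.Event(), []
    prober = threading.Thread(target=probe_downtime, args=(stop, gaps))
    prober.start()

    start = time.monotonic()
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)  # live memory migration
    total = time.monotonic() - start

    stop.set()
    prober.join()
    print(f"total migration time: {total:.2f}s, longest unresponsive gap: {gaps[0]:.3f}s")

if __name__ == "__main__":
    measure_migration()
```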
Researchers studying large-scale questions in hydrology, oceanography, and meteorology can work with existing data through myriad platforms that provide access to remote datasets and render that information in various graphical outputs for interpretation and analysis. A survey of 30 publicly available hydrometeorological data platforms reviews the current state of the art in water-data discovery and visualization. Such platforms best meet the needs of a diverse user community by providing valuable data content, facilitating data exchange, and supporting visual analysis. To provide datasets of value to wider audiences, data providers can emphasize building datasets that are not only voluminous but also proportionally content rich in geographic, temporal, and measurement breadth, through integration of complementary datasets and coordinated data-collection programs. To support efficient data interchange, including software-driven integration of complementary content, data providers should increase their adoption of web services and machine-readable formats for sharing data. In addition, this work surveys best practices in visualization and advocates for graphical-user-interface features that give both novice and expert end users the flexibility required to integrate heterogeneous datasets. Features that stand out as particularly useful include comprehensive content filters for customizable data queries, multiparameter plotting, statistical analysis tools, and predictive visualization. Effective combination of comprehensive, as well as voluminous, content; widely implemented standards for efficient data interchange; and appropriate visualization flexibility in the next generation of tools provided as software services will empower a wide user community spanning laypersons, students, and researchers.
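As an illustration of the web-service, machine-readable style of data interchange advocated above, the snippet below requests recent streamflow from the USGS NWIS Instantaneous Values service in JSON. The chosen site, parameter code, and the assumed WaterML-JSON response layout are example values; adjust the parsing if the service's response structure differs.

```python
# Illustrative request to a public hydrologic web service (USGS NWIS Instantaneous
# Values) for machine-readable streamflow data.
import requests

URL = "https://waterservices.usgs.gov/nwis/iv/"
params = {
    "format": "json",        # machine-readable output rather than HTML
    "sites": "01646500",     # example gauge: Potomac River near Washington, DC
    "parameterCd": "00060",  # discharge, cubic feet per second
    "period": "P1D",         # last 24 hours (ISO 8601 duration)
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()

# Assumed WaterML-JSON layout: one time series with one block of timestamped values.
series = response.json()["value"]["timeSeries"][0]
points = series["values"][0]["value"]

for point in points[-5:]:  # print the five most recent observations
    print(point["dateTime"], point["value"])
```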