“…In past years, several software tools have been developed that integrate state-of-the-art techniques for collecting, manipulating, and modeling log data for log-based error analysis; for example, "MEADEP" [35], "NOW" [36], and "SEC" [37,38]. However, since log-based investigation is not supported by fully automated procedures, most of the analysis burden falls on the analyst, who needs adequate knowledge of the system.…”
Section: Importance Of Defect Predictions
confidence: 99%
“…In addition, an error that activates multiple messages in the log requires considerable effort to coalesce the entries resulting from the same error manifestation. Preprocessing tasks are crucial for accurate error analysis [6,22,27,36].…”
Section: Importance Of Defect Predictions
Abstract-The demand for distributed and complex business applications in the enterprise requires error-free and high-quality application systems. Unfortunately, most developed software contains defects that cause system failures. Such failures are unacceptable in critical or sensitive applications, which makes the development of high-quality, defect-free software extremely important. To predict and eliminate defects effectively, reduce failures, and improve software quality, it is important to better understand and quantify the association between software defects and the failures they cause. This paper presents a review of software defect prediction and prevention approaches for quality software development. It also reviews the potential and constraints of these mechanisms in quality product development and maintenance.
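The coalescing effort described in the excerpt above can be illustrated with a small preprocessing step: collapsing repeated log entries that likely stem from the same error manifestation. The sketch below is illustrative only; the entry layout (timestamp, node, message), the message-template masking, and the 5-minute window are assumptions, not the procedure implemented by MEADEP, NOW, or SEC.

    import re

    def template(message: str) -> str:
        """Mask numeric and hex fields so repeated messages share one key."""
        return re.sub(r"0x[0-9a-fA-F]+|\d+", "#", message)

    def collapse(entries, window=300.0):
        """Collapse repeated entries that likely report the same error.

        entries: iterable of (timestamp_seconds, node, message) tuples.
        Entries from the same node with the same message template are merged
        when they occur within `window` seconds of the previous occurrence.
        Returns a list of [first_timestamp, node, template, count] records.
        """
        last_seen = {}   # (node, template) -> (last timestamp, index in collapsed)
        collapsed = []
        for ts, node, msg in sorted(entries):
            key = (node, template(msg))
            if key in last_seen and ts - last_seen[key][0] <= window:
                idx = last_seen[key][1]
                collapsed[idx][3] += 1
                last_seen[key] = (ts, idx)
            else:
                collapsed.append([ts, node, key[1], 1])
                last_seen[key] = (ts, len(collapsed) - 1)
        return collapsed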
“…Although the reliability of individual workstations has been advancing rapidly using technologies such as error detection mechanisms built inside the processor [1,2], on the board, and in the cabinet [3], many network problems, such as lack of response or broken links, are still not well understood. The failure data analysis reported in [4] indicates that network-related problems contributed to approximately 40% of system failures observed in distributed environments. This shows that studying the behavior of the network components is essential for understanding how an application running on a networked system will behave in the presence of faults.…”
This paper presents an injection-based approach to analyze dependability of high-speed networks using the Myrinet as an example testbed. Instead of injecting faults related to network protocols, we injected faults into the host interface component, which performs the actual send and receive operations. The fault model used was a temporary single bit flip in an instruction executing on the host interface's custom processor, corresponding to a transient fault in the processor itself. Results show that more than 25% of the injected faults resulted in interface failures. Furthermore, we observed fault propagation from an interface to its host computer or to another interface to which it sent a message. These findings suggest that two important issues for high-speed networking in critical applications are protecting the host computer from errant or malicious interface components and implementing thorough message acceptance test mechanisms to prevent errant messages from propagating faults between interfaces.
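The fault model above, a temporary single-bit flip in an instruction executing on the interface's processor, can be pictured with a short sketch. Everything below is a simplified stand-in: the 32-bit instruction width, the made-up firmware list, and the helper names are assumptions, not the injection tool used in the paper.

    import random

    def flip_random_bit(instruction: int, width: int = 32) -> int:
        """Return the instruction word with exactly one randomly chosen bit inverted."""
        return instruction ^ (1 << random.randrange(width))

    # Inject a transient fault into one instruction of a (made-up) firmware image,
    # run the workload, then restore the original word: the fault is temporary.
    firmware = [0x8C820000, 0x24420004, 0xAC820000]   # illustrative instruction words
    target = random.randrange(len(firmware))
    saved = firmware[target]
    firmware[target] = flip_random_bit(saved)
    # ... run the workload here and record whether the interface fails or the
    # error propagates to the host or to a peer interface ...
    firmware[target] = saved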
“…The common trend is to use tupling with a fixed value for the time window, such as 5 minutes [1], [3], [6]-[8], [27], [28], 20 minutes [9], [10], [12], or 60 minutes [12], [13], usually without any tuning (such as the knee rule) or validation.…”
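A minimal sketch of time tupling and a crude knee heuristic is given below, assuming events are supplied as plain timestamps in seconds; the 5% relative-change threshold used to detect the knee is an illustrative assumption, not a rule taken from the cited studies.

    def tuple_count(timestamps, window):
        """Number of tuples formed when events closer than `window` seconds are merged."""
        count, last = 0, None
        for ts in sorted(timestamps):
            if last is None or ts - last > window:
                count += 1
            last = ts
        return count

    def knee_window(timestamps, candidates, rel_change=0.05):
        """Pick the smallest window beyond which enlarging it barely reduces
        the tuple count (a crude stand-in for the knee rule)."""
        counts = [tuple_count(timestamps, w) for w in candidates]
        for i in range(1, len(counts)):
            if counts[i - 1] and (counts[i - 1] - counts[i]) / counts[i - 1] < rel_change:
                return candidates[i - 1]
        return candidates[-1]

    # Sweep windows from 1 to 60 minutes instead of fixing 5, 20, or 60 minutes.
    windows = [60 * m for m in (1, 2, 5, 10, 20, 30, 60)]
    # chosen = knee_window(event_timestamps, windows)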
This paper presents a novel approach to assess time coalescence techniques. These techniques are widely used to reconstruct the failure process of a system and to estimate dependability measurements from its event logs. The approach is based on the use of automatically generated logs, accompanied by exact knowledge of the ground truth on the failure process. The assessment is conducted by comparing the presumed failure process, reconstructed via coalescence, with the ground truth. We focus on supercomputer logs, due to the increasing importance of automatic event log analysis for these systems. Experimental results show how the approach makes it possible to compare different time coalescence techniques and to identify their weaknesses with respect to given system settings. In addition, the results revealed an interesting correlation between errors introduced by coalescence and errors in the estimated dependability measurements.
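The assessment idea, comparing the coalesced tuples against the known ground truth, can be sketched as interval matching. Representing both tuples and true failures as (start, end) intervals and the simple collision/truncation counts below are simplifying assumptions, not the paper's exact metrics.

    def overlaps(a, b):
        """True if the (start, end) intervals a and b overlap."""
        return a[0] <= b[1] and b[0] <= a[1]

    def assess(tuples, ground_truth):
        """tuples, ground_truth: lists of (start, end) time intervals.

        Counts collisions (one reconstructed tuple spanning several true
        failures) and truncations (one true failure split across several
        tuples).
        """
        collisions = sum(1 for t in tuples
                         if sum(overlaps(t, g) for g in ground_truth) > 1)
        truncations = sum(1 for g in ground_truth
                          if sum(overlaps(t, g) for t in tuples) > 1)
        return {"collisions": collisions, "truncations": truncations}

In practice one would also cross-check dependability measures estimated from the reconstructed tuples (for example, time between failures) against those computed directly from the ground truth.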