The vulnerability discovery process for a program describes the rate at which security vulnerabilities are discovered. Being able to predict this process allows developers to plan the resource allocation needed to develop patches, and it enables users to assess security risks. There is thus a need for a model of the discovery process that can predict the number of vulnerabilities likely to be discovered in a given time frame. Recent studies have produced vulnerability discovery models suitable for a specific version of a software product. However, these models may not accurately estimate vulnerability discovery rates when successive versions are considered. In this paper, we propose a new approach for quantitatively modeling the vulnerability discovery process, based on shared source code measurements among multi-version software systems. Such a modeling approach can be used to assess security risk both before and after the release of a version. The applicability of the approach is examined using two open source software systems, viz., the Apache HTTP Web server and the MySQL Database Management System (DBMS). We examine the relationship between shared code size and shared vulnerabilities between two successive versions. We observe that vulnerabilities continue to be discovered in an older version because part of its code is shared by the newer, more popular version; thus, even when the installed base of an older version has declined, vulnerabilities applicable to it are still discovered. Our results are validated using the source code and vulnerability data for two major versions of the Apache HTTP Web server and two major versions of MySQL DBMS.
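To make the shared-code idea concrete, the sketch below decomposes a version's cumulative vulnerability count into one logistic discovery curve (in the style of the Alhazmi-Malaiya Logistic model) for code inherited from the previous version and one for newly added code. The decomposition, function names, and parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def aml(t, A, B, C):
    """Alhazmi-Malaiya Logistic (AML) model: cumulative vulnerabilities
    discovered by time t, saturating at B as the code ages."""
    return B / (B * C * np.exp(-A * B * t) + 1.0)

def multiversion_discovery(t, shared_params, new_params):
    """Hypothetical decomposition: a version's cumulative discoveries as
    the sum of a process over code shared with the previous version and
    a process over newly added code."""
    return aml(t, *shared_params) + aml(t, *new_params)

# Example (made-up parameters): shared code carries most of the
# eventual vulnerabilities, which is why discoveries keep accruing
# to the older version as well.
t = np.linspace(0, 60, 7)  # months since release
print(multiversion_discovery(t, shared_params=(0.002, 80, 0.5),
                             new_params=(0.004, 30, 0.5)))
```

Because the shared-code component applies to every version containing that code, a vulnerability found in it counts against both the old and the new release, matching the observation above.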
SUMMARY: Increasing the processing speed of automatic fare collection gates (AFCG) is of great importance for handling passengers getting on and off high-density transportation at peak hours. At the same time, reliability is indispensable for processing tickets, which are a type of non-currency negotiable instrument. A passenger ticket system is therefore required that provides both high-speed processing and high reliability. To increase passenger convenience and reduce maintenance costs, a wireless IC card ticket system is desirable. However, because this system uses wireless communication between the card and the fare collection gate, data loss during ticket examination is a problem. In this paper, the author proposes an autonomous decentralized technology that satisfies the requirements of high-speed processing and high reliability in a wireless IC card ticket system. IC cards, automatic fare collection gates, and a central server are designed as nodes of an autonomous decentralized system. In particular, to achieve high-speed processing, the author proposes an autonomous decentralized algorithm in which fares are calculated by the IC cards and automatic fare collection gates themselves. To demonstrate the effectiveness of this algorithm, the author constructs models and performs comparison tests. As a technology for ensuring high reliability, the author also proposes an autonomous decentralized data consistency technology for each subsystem. These technologies were introduced in the Suica system of the East Japan Railway Company in Japan, where their effectiveness was proven.
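As an illustration of how fare calculation can be pushed out to the card and the gate, the following minimal Python sketch lets the exit gate compute and deduct the fare purely from data carried on the card, deferring synchronization with the center server. Station names, the fare table, and the function names are hypothetical; the deployed Suica protocol is far more elaborate.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical flat fare table; real systems use distance- or zone-based fares.
FARE = {("A", "B"): 150, ("A", "C"): 200, ("B", "C"): 150}

@dataclass
class ICCard:
    card_id: str
    balance: int                      # stored value held on the card itself
    entry_station: Optional[str] = None

def touch_in(card: ICCard, station: str) -> bool:
    """Entry gate: record the entry station on the card autonomously,
    with no round trip to the center server."""
    if card.balance <= 0:
        return False                  # gate stays closed
    card.entry_station = station
    return True

def touch_out(card: ICCard, station: str) -> bool:
    """Exit gate: compute the fare locally from data carried on the card
    and deduct it from the stored value; the transaction log is forwarded
    to the center server asynchronously, keeping gate latency low."""
    if card.entry_station is None:
        return False
    fare = FARE.get(tuple(sorted((card.entry_station, station))))
    if fare is None or card.balance < fare:
        return False
    card.balance -= fare
    card.entry_station = None
    return True

card = ICCard("0001", balance=1000)
assert touch_in(card, "A") and touch_out(card, "C")
print(card.balance)  # 800
```

The point of the decentralization is that neither gate ever blocks on the network: all state needed for the fare decision travels with the card, which is what keeps per-passenger processing time within the peak-hour budget.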
High-load transaction systems such as the IC (Integrated Circuit) Card Ticket System require high performance, high reliability, and service continuity. Wireless communication, however, poses a problem: higher processing performance at the gates results in lower reliability of the data. The solution is the Autonomous Decentralized Architecture, in which "IC cards," "terminals," and a "center server" are designed as autonomous nodes. Based on this architecture, two technologies have been introduced: the "Autonomous Decentralized Fare Calculation Algorithm" for high performance and the "Autonomous Decentralized Data Consistency Technology" for high reliability. This paper examines the Multi-layered Data Consistency Technology, which assures service continuity using heterogeneous data fields with various time ranges. Appropriate parameter values are derived through theoretical analysis, modeling, and simulation. These techniques are used in the practical "Suica" system at the East Japan Railway Company, and the system runs satisfactorily, without any fatal errors.
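The sketch below illustrates one way such multi-layered consistency can work: each tier retains transaction records over a different time range and pushes records the tier above is missing whenever connectivity allows, so a record dropped during the wireless exchange at the gate can still be recovered later from the card's on-card history. The class, field names, and retention behavior are assumptions for illustration, not the deployed design.

```python
class Layer:
    """One tier of a multi-layered consistency scheme. Each tier keeps
    records for a different time range: the card its own recent history,
    the gate roughly a day of local logs, the server everything
    (retention windows here are illustrative assumptions)."""
    def __init__(self, name: str):
        self.name = name
        self.log = {}                 # txn_id -> record

    def record(self, txn_id: str, rec: dict) -> None:
        self.log[txn_id] = rec

    def reconcile_to(self, upper: "Layer") -> None:
        # Push any transactions the upper tier is missing; idempotent,
        # so it can run whenever connectivity allows without blocking gates.
        for txn_id, rec in self.log.items():
            upper.log.setdefault(txn_id, rec)

card, gate, server = Layer("card"), Layer("gate"), Layer("server")
card.record("t1", {"fare": 200}); gate.record("t1", {"fare": 200})
card.record("t2", {"fare": 150})   # gate's copy dropped mid-communication
gate.reconcile_to(server)                            # t1 reaches the server
card.reconcile_to(gate); gate.reconcile_to(server)   # t2 recovered via the card
print(sorted(server.log))          # ['t1', 't2']
```

Because every tier holds overlapping data over a different time window, service continues through a dropped exchange, and consistency is restored opportunistically rather than by blocking the passenger at the gate.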