Application Programming Interfaces (APIs) are means of communication between applications, and hence can be seen as user interfaces, just with a different kind of user, i.e., software or computers. However, the very first consumers of APIs are humans, namely programmers. Based on the available documentation and the perceived "ease of use" (sometimes driven by corporate decisions and/or restrictions), they decide whether or not to use a specific API. In this paper, we propose a data-driven approach to measure web API usability, expressed through the predicted error rate. Following the reviewed state of the art in API usability, we identify a set of usability attributes, and for each of them we propose indicators that web API providers should refer to when developing usable web APIs. Our focus in this paper is on those indicators that can be quantified using the API logs, which indeed reflect the actual behaviour of programmers. Next, we define metrics for the aforementioned indicators and exemplify them in our use case, applying them to the logs from the web API of the District Health Information System (DHIS2) used at the World Health Organization (WHO). Using these metrics as features, we build a classifier model to predict the error rate of API endpoints. Besides finding usability issues, we also drill down into the usage logs and investigate the potential causes of these errors.
Index Terms: API usability, API logs, log mining, web API.
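The classifier step described in this abstract could look roughly like the following minimal sketch. The feature names and the synthetic data are purely illustrative assumptions, not the metrics or model from the paper:

```python
# Hypothetical sketch: predicting whether an API endpoint is error-prone
# from per-endpoint metrics derived from usage logs. Features and labels
# are synthetic; the paper's actual metrics and model may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Assumed per-endpoint features (illustrative): calls per day,
# distinct consumers, avg. parameters per call, documentation length.
X = rng.random((200, 4))

# Toy labeling rule: endpoints with many parameters and sparse
# documentation are marked as error-prone.
y = (X[:, 2] > 0.6) & (X[:, 3] < 0.4)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)

# Predicted error-proneness for a new endpoint's metric vector.
print(clf.predict([[0.5, 0.3, 0.9, 0.1]])[0])
```

In practice the features would be computed by aggregating the raw log entries per endpoint before training.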
Applications typically communicate with each other, accessing and exposing data and features through Application Programming Interfaces (APIs). Even though API consumers expect APIs to be stable and well established, APIs are prone to continuous change, going through different evolutionary phases over their lifecycle. These changes are of different types, are caused by different needs, and affect consumers in different ways. In this paper, we identify and classify the changes that often happen to APIs, and investigate how these changes are reflected in the documentation, release notes, issue tracker, and API usage logs. Analyzing each step of a change, from its implementation to the impact it has on API consumers, helps us form a bigger picture of API evolution. Thus, we review the current state of the art in API evolution and, as a result, define a classification framework considering both the changes that may occur to APIs and the reasons behind them. In addition, we exemplify the framework using a software platform offering a web API, called District Health Information System (DHIS2), used collaboratively by several departments of the World Health Organization (WHO).
The use of web Application Programming Interfaces (WAPIs) has experienced a boost in recent years. Developers (i.e., WAPI consumers) continuously rely on third-party WAPIs to incorporate certain features into their applications. Consequently, WAPI evolution becomes more challenging in terms of keeping the service aligned with consumers' needs. When deciding which changes to perform, besides several dynamic business requirements (from the organization whose data are exposed), WAPI providers should take into account the way consumers use the WAPI. While consumers may report bugs or request new endpoints, their feedback may be partial and biased (based on the specific endpoints they use). Alternatively, WAPI providers can exploit the interaction between consumers and WAPIs, which is recorded in the WAPI usage logs generated while consumers access the WAPI. In this direction, this paper presents PatternLens, a tool that aims to support providers in planning changes by analyzing WAPI usage logs. Using process mining techniques, the tool infers from the logs a set of usage patterns (e.g., endpoints that are frequently called one after the other), whose occurrences imply the need for potential changes (e.g., merging the two endpoints). The suggested patterns are displayed together with informative metrics, and WAPI providers can accept or reject them. These metrics support providers in decision-making by giving them information about the consequences of accepting or rejecting the suggestions.
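The core pattern mentioned in this abstract, endpoints frequently called one after the other, can be sketched as a simple successor-pair count over a usage log. This is an illustrative toy, not the PatternLens implementation; the log format and endpoint names are assumptions:

```python
# Illustrative sketch: counting pairs of endpoints that consumers call
# consecutively, from a simplified usage log. A frequent pair may hint
# that the two endpoints could be merged or composed.
from collections import Counter

# Assumed simplified log format: (consumer_id, request) in chronological order.
log = [
    ("c1", "GET /users"),
    ("c1", "GET /userCredentials"),
    ("c1", "GET /dataElements"),
    ("c2", "GET /users"),
    ("c2", "GET /userCredentials"),
    ("c3", "GET /users"),
    ("c3", "GET /userCredentials"),
]

# Group calls per consumer so pairs never span two different consumers.
per_consumer = {}
for consumer, endpoint in log:
    per_consumer.setdefault(consumer, []).append(endpoint)

# Count consecutive (predecessor, successor) endpoint pairs.
pairs = Counter()
for calls in per_consumer.values():
    pairs.update(zip(calls, calls[1:]))

print(pairs.most_common(1))
# The most frequent pair is a candidate usage pattern to show the provider.
```

A real tool would also normalize paths (stripping resource IDs) and attach metrics such as support and the number of affected consumers to each suggested pattern.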