Source code modifications are often documented with log messages. Such messages are a key component of software maintenance: they can help developers validate changes, locate and triage defects, and understand modifications. However, this documentation can be burdensome to create and can be incomplete or inaccurate. We present an automatic technique for synthesizing succinct human-readable documentation for arbitrary program differences. Our algorithm is based on a combination of symbolic execution and a novel approach to code summarization. The documentation it produces describes the effect of a change on the runtime behavior of a program, including the conditions under which program behavior changes and what the new behavior is. We compare our documentation to 250 human-written log messages from 5 popular open source projects. Employing a human study, we find that our generated documentation is suitable for supplementing or replacing 89% of existing log messages that directly describe a code change.
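As a rough illustration of the kind of output described above, the sketch below approximates the idea with concrete execution over a small sampled input domain rather than the paper's symbolic execution and code summarization; the functions and the "change" are hypothetical.

```python
# Illustrative sketch only: compare two versions of a function and report
# the conditions (here, concrete inputs) under which behavior changes and
# what the new behavior is. The paper's technique uses symbolic execution;
# this stand-in samples a small domain instead.

def old_version(x):
    return x // 2

def new_version(x):
    # Hypothetical change: negative inputs are now clamped to zero.
    return max(x, 0) // 2

def summarize_change(old, new, domain):
    """Summarize how `new` differs from `old` over the sampled domain."""
    changed = [(x, old(x), new(x)) for x in domain if old(x) != new(x)]
    if not changed:
        return "No behavioral change observed on the sampled domain."
    lines = ["Behavior changes when:"]
    for x, before, after in changed:
        lines.append(f"  input {x}: result was {before}, is now {after}")
    return "\n".join(lines)

print(summarize_change(old_version, new_version, range(-3, 4)))
```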
In this paper, we explore the concept of code readability and investigate its relation to software quality. With data collected from human annotators, we derive associations between a simple set of local code features and human notions of readability. Using those features, we construct an automated readability measure and show that it can be 80% effective, and better than a human on average, at predicting readability judgments. Furthermore, we show that this metric correlates strongly with two traditional measures of software quality, code changes and defect reports. Finally, we discuss the implications of this study on programming language design and engineering practice. For example, our data suggest that comments, in and of themselves, are less important than simple blank lines to local judgments of readability.
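To make the shape of such a metric concrete, the sketch below extracts a few simple local features from a code snippet and combines them with hand-picked weights; the specific features and weights are hypothetical stand-ins, not the annotator-derived model described in the abstract.

```python
# Illustrative sketch only: a local-feature readability score.
# Weights are invented for demonstration; the paper learns them from
# human readability annotations.
import re

def local_features(snippet: str) -> dict:
    lines = snippet.splitlines() or [""]
    identifiers = re.findall(r"[A-Za-z_]\w*", snippet)
    return {
        "avg_line_length": sum(len(l) for l in lines) / len(lines),
        "max_line_length": max(len(l) for l in lines),
        "blank_line_ratio": sum(1 for l in lines if not l.strip()) / len(lines),
        "avg_identifier_length": (sum(map(len, identifiers)) / len(identifiers))
                                 if identifiers else 0.0,
        "comment_ratio": sum(1 for l in lines if l.lstrip().startswith("#")) / len(lines),
    }

def readability_score(snippet: str) -> float:
    f = local_features(snippet)
    # Hypothetical weights: long lines and identifiers lower the score,
    # blank lines and comments raise it slightly.
    return (1.0
            - 0.01 * f["avg_line_length"]
            - 0.005 * f["max_line_length"]
            - 0.02 * f["avg_identifier_length"]
            + 0.5 * f["blank_line_ratio"]
            + 0.2 * f["comment_ratio"])

print(readability_score("x = compute_total(items)\n\nreturn x"))
```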
Exception handling is a powerful and widely-used programming language abstraction for constructing robust software systems. Unfortunately, it introduces an inter-procedural flow of control that can be difficult to reason about. Failure to do so correctly can lead to security vulnerabilities, breaches of API encapsulation, and any number of safety policy violations. We present a fully automated tool that statically infers and characterizes exception-causing conditions in Java programs. Our tool is based on an inter-procedural, context-sensitive analysis. The output of this tool is well-suited for use as human-readable documentation of exceptional conditions. We evaluate the output of our tool by comparing it to over 900 instances of existing exception documentation in almost two million lines of code. We find that the output of our tool is at least as good as existing documentation 85% of the time and is better 25% of the time.
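The paper's tool targets Java with an inter-procedural, context-sensitive analysis; as a much-simplified analogue of the idea, the intra-procedural Python sketch below pairs each `raise` statement with its enclosing `if` guard to produce human-readable exception documentation. All names here are hypothetical examples.

```python
# Illustrative sketch only: infer "raises X when C" documentation by
# matching raise statements with their immediately enclosing if-conditions.
import ast

SOURCE = '''
def withdraw(balance, amount):
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if amount > balance:
        raise RuntimeError("insufficient funds")
    return balance - amount
'''

def exception_conditions(source: str):
    tree = ast.parse(source)
    docs = []
    for func in (n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)):
        for node in ast.walk(func):
            if isinstance(node, ast.If):
                for stmt in node.body:
                    if isinstance(stmt, ast.Raise):
                        exc = ast.unparse(stmt.exc) if stmt.exc else "an exception"
                        cond = ast.unparse(node.test)
                        docs.append(f"{func.name}: raises {exc} when {cond}")
    return docs

for line in exception_conditions(SOURCE):
    print(line)
```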