Code fragments are an important resource for understanding the Application Programming Interface (API) of software libraries. Many usage scenarios for code fragments require them to be distilled to their essence: for example, when serving as cues to longer documents, when reminding developers of a previously known idiom, or when displaying search results. This dissertation reports on research on shortening, or summarizing, code fragments and makes three main contributions: a set of lessons learned from a case study on a supervised machine learning approach to the generation of code fragment summaries; an empirically grounded catalog of source code summarization practices; and the design, implementation, and evaluation of a novel optimization-based summarization technique for code fragments.

The case study on the generation of code fragment summaries was based on a supervised machine learning approach that classifies whether a line in a code fragment should be included in a summary. We present the lessons learned that were key to the two subsequent parts of the research: the best-performing feature set was a combination of syntactic and query-related features, and we identified three limitations of our supervised machine learning approach and of the line-based problem formulation. These limitations concern the use of individual lines as the unit of granularity, the difficulty of obtaining high-quality training data, and the reliance on features that are local to a line and therefore ignore dependencies among different parts of the code.

Motivated by the limitations of line-based summaries, we studied how humans shorten code fragments to understand the nature of the output of the summarization process. Based on 156 hand-generated summaries obtained from 16 participants, we analyzed decisions on which content to select and how to present this content in a summary. Using a mix of qualitative and quantitative methods, we elicited a catalog of common summarization practices behind these decisions across the summaries, as well as the rationale behind the practices. We found that none of the participants exclusively extracted code verbatim for the summaries. Participants employed many practices to modify the content: trimming lines, truncating code, aggregating large amounts of code, and refactoring code. Participants were concerned not only with the main goal of the task, shortening the code, but also with whether the summary looked compilable, readable, and understandable.

With the insights from the machine learning case study and the catalog of summarization practices, we devised a technique to generate summaries constrained in both height and width: given as input a code fragment and a query (a set of keywords), our technique produces a shorter version of the fragment that fits in a two-dimensional space (L lines by W columns) and that captures as much as possible of the essential elements of the original code related to the query, while remaining readable. To generate these summaries, we developed a code summarization tool called Konaila. Konaila maximizes the value of the con...