Identifier names convey useful information about the intended semantics of code. Name-based program analyses use this information, e.g., to detect bugs, to predict types, and to improve the readability of code. At the core of name-based analyses are semantic representations of identifiers, e.g., in the form of learned embeddings. The high-level goal of such a representation is to encode whether two identifiers, e.g., len and size, are semantically similar. Unfortunately, it is currently unclear to what extent semantic representations match the semantic relatedness and similarity perceived by developers. This paper presents IdBench, the first benchmark for evaluating semantic representations against a ground truth created from thousands of ratings by 500 software developers. We use IdBench to study state-of-the-art embedding techniques proposed for natural language, an embedding technique specifically designed for source code, and lexical string distance functions. Our results show that the effectiveness of semantic representations varies significantly and that the best available embeddings successfully represent semantic relatedness. On the downside, no existing technique provides a satisfactory representation of semantic similarities, among other reasons because identifiers with opposing meanings are incorrectly considered to be similar, which may lead to fatal mistakes, e.g., in a refactoring tool. Studying the strengths and weaknesses of the different techniques shows that they complement each other. As a first step toward exploiting this complementarity, we present an ensemble model that combines existing techniques and that clearly outperforms the best available semantic representation.

Index Terms-source code, neural networks, embeddings, identifiers, benchmark

I. INTRODUCTION

Identifier names play an important role in writing, understanding, and maintaining high-quality source code [1]. Because they convey information about the meaning of variables, functions, classes, and other program elements, developers often rely on identifiers to understand code written by themselves and others. Beyond developers, various automated techniques analyze, use, and improve identifier names. For example, identifiers have been used to find programming errors [2]-[5], to mine specifications [6], to infer types [7], [8], to predict the name of a method [9], or to complete partial code using a learned language model [10]. Techniques for