In this paper, we introduce discrete-time linear mean-field games subject to an infinite-horizon discounted-cost optimality criterion. The state space of a generic agent is a compact Borel space. At every time step, each agent is randomly coupled with another agent through its dynamics and one-stage cost function, where this randomization is generated by the empirical distribution of the agents' states (i.e., the mean-field term). As a result, the transition probability and the one-stage cost function of each agent depend linearly on the mean-field term, which is the key distinction between classical mean-field games and linear mean-field games. Under mild assumptions, we show that the policy obtained from the infinite-population equilibrium is ε(N)-Nash when the number of agents N is sufficiently large, where ε(N) is an explicit function of N. Then, using the linear programming formulation of MDPs and the linearity of the transition probability in the mean-field term, we formulate the game in the infinite-population limit as a generalized Nash equilibrium problem (GNEP) and establish an algorithm for computing an equilibrium with a convergence guarantee.

1. Introduction. This paper introduces linear mean-field games, which are discrete-time stochastic dynamic games whose one-stage cost and transition probability are linear with respect to the empirical distribution of the states (i.e., the mean-field term). Specifically, at each