While some form of evaluation has always been a requirement of development projects, in the media assistance field it has predominantly been limited to basic counting of outputs, such as the number of journalists trained or the number of articles produced on a topic. Few media assistance evaluations manage to provide sound evidence of impacts on governance and social change. So far, most responses to the problem of media assistance impact evaluation have collated evaluation methodologies and methods into toolkits. This paper argues that the problem of impact evaluation in media assistance should be understood as more than a simple issue of methods, and outlines three underlying tensions and challenges that stifle the implementation of effective evaluation practices in media assistance. First, serious conceptual ambiguities affect evaluation design. Second, bureaucratic systems and imperatives often drive evaluation practices, reducing their utility and richness. Third, the search for an ultimate method or toolkit of methods for media assistance evaluation tends to overlook the complex epistemological and political undercurrents of the evaluation discipline, which can lead to methods being applied without consideration of their ontological implications. Only when these contextual factors are known and understood can effective evaluations be designed that meet the needs of all stakeholders.