In this article, we describe and discuss the most significant teacher-centric aspects of student response systems (SRS) that we have found to negatively affect students' experience of using SRS in lecture settings. By doing so, we hope to increase teachers' awareness of how they use SRS and of how seemingly trivial choices in how SRS is used can have a significant negative impact on students' experiences, especially when they are repeated often. We cover areas such as consistency when using SRS, time usage, preparation, teachers' level of experience with SRS, teacher commitment and attitudes, teacher explanations, and students' fear that voting results can mislead the teacher. The data are based on 3 years of experience in developing and using an online SRS in classroom lectures, and consist of focused (semi-structured) student group interviews, student surveys and personal observations.

Keywords: audience response systems; clickers; student attitudes; teaching pitfalls

Introduction

A student response system (SRS) can be described as an electronic voting system that presents students with a multiple-choice question, often as part of a quiz, which they answer with a small handheld device (commonly referred to as a "clicker"), usually after engaging in peer discussions. Benefits from utilising such a system in lectures include increased student motivation and engagement (…). With such a promising record, it is easy to forget that SRS is only a tool that, if not used correctly, can actually be detrimental to lectures (Barber and Njus 2007). Focusing primarily on the technology in the belief that the technology will automatically improve lectures, instead of focusing on how students think and learn, is the single most important reason for failure when implementing new technology in education (Mayer 2005), and SRS is no exception. There are several publications that give best practice guidelines for using SRS (e.g. …). Although this seems to be an elaborate list, these negative aspects are mostly mentioned only briefly and are not described in depth in the literature (for a list of general challenges with SRS, see Kay and LeSage 2009). In-depth studies of these aspects and of how they affect students' experience of SRS are, to the authors' knowledge, lacking. To fully understand and appreciate best practice guides on SRS, it is important to understand how and why different aspects of implementation can have a negative impact on students.

In this article, we describe and discuss such aspects after 3 years of experience (since 2009) in developing and using an online SRS for modern handheld devices, such as smartphones, at Sør-Trøndelag University College in Norway. We start with background information, describing the classes where SRS was used and the implementation choices for the different years of testing. This is followed by a brief description of the research methods and a presentation of our results. We conclude with a discussion and our conclusions.