As conferencing tools become an increasingly common feature of students' learning experience, tutors need to understand how these tools facilitate the formation and maintenance of collaborative learning communities. Inevitably, the pursuit of this understanding requires some form of analysis of the interactions involved. Analysis of the written transcripts created by students during computer-mediated conferencing (CMC) invariably takes the form of systematic content analysis. For small-scale work the analysis can be undertaken manually, but when the volume is large, as might arise from courses delivered wholly online or through a blended learning approach, some form of automated content analysis comes into its own. Whether conducted quantitatively or qualitatively, this type of approach has much to commend it to higher education tutors wishing to assess the progress of their students and to improve their understanding of how students learn through computer conferencing technology. Given that tutors need to be aware of the advantages and limitations of such tools, this paper examines the content analysis approaches currently available. Debate about these approaches generally focuses on the appropriateness of the methodology and on how faithfully it represents interaction patterns and learning processes. In this paper we propose to extend Rourke et al.'s study to cover a wider range of methodological models. Specifically, we examine the merits and demerits of these models as exemplified in a selection of influential conferencing analysis studies.