Executive Summary

There exists a wealth of computing education literature devoted to interventions designed to overcome novices' difficulties in learning to write computer programs. However, various studies have shown that the majority of students at the end of a semester of instruction are still unable to write a simple computer program, despite the best efforts of their teachers (Lister et al., 2004; McCracken et al., 2001; Soloway, Bonar, & Ehrlich, 1983). In an effort to address this problem, a workshop titled Building Research in Australasian Computing Education (BRACE) was convened in 2004. BRACE brought together academics interested in learning and applying the techniques and methodologies of action research to the problem of poor student code-writing performance. At this workshop, and at those that followed, participants agreed to use end-of-semester assessments to try to pinpoint the key steps and difficulties beginners face in learning introductory programming. Subsequently the group, which has come to be known as the BRACElet project, has continued to meet twice a year in an effort to better understand how students learn and grasp various programming concepts.

Over the past five years, the BRACElet project has fostered a multi-institutional community of practice amongst academics who teach novice computer programming. For participants, the BRACElet project provides the benefits of external moderation, quality assurance, and benchmarking of examinations. At a BRACElet meeting, attendees decide upon a small common set of question types for use within their own local examinations; it is the subsequent analysis and discussion of locally collected data from various institutions that comprises the BRACElet project's iterative series of action research.
By examining common difficulties that students across institutions have with various question types, such as reading, tracing, and coding, the BRACElet project attempts to build theory on how learners acquire programming knowledge.

In 2008, academics from Victoria University joined the BRACElet group. After modifying our end-of-semester examination to contain the BRACElet core question types, the examination was run and data were collected across the cohort of 32 students. The analysis treated student performance on code-writing questions as the dependent variable and examined the relationship between code-writing questions and non-code-writing questions. Overall, our study found that the combination of tracing and explaining questions, more so than each skill independently, correlates highly with code-writing skill, supporting the