Objective: We reviewed and appraised how the learning curve has been addressed in past health technology assessments.

Method: We performed a systematic review of papers in clinical databases (BIOSIS, CINAHL, Cochrane Library, EMBASE, HealthSTAR, MEDLINE, Science Citation Index, and Social Science Citation Index) using the search term "learning curve."

Results: The clinical search retrieved 4,571 abstracts for assessment, of which 559 (12%) were published articles eligible for review. Of these, 272 were judged to have formally assessed a learning curve. The procedures assessed were minimal access (51%), other surgical (41%), and diagnostic (8%). The majority of the studies (95%) were case series. Some 47% of studies addressed only individual operator performance, and 52% addressed institutional performance. Data were collected prospectively in 40% of studies, retrospectively in 26%, and by an unclear method in 31%. The statistical methods used were simple graphs (44%), splitting the data chronologically and comparing the groups with a t test or chi-squared test (60%), curve fitting (12%), and other model fitting (5%).

Conclusions: Learning curves are rarely considered formally in health technology assessment. Where they are, the reporting of the studies and the statistical methods used are weak. As a minimum, reporting of learning should include the number and experience of the operators and a detailed description of data collection. Improved statistical methods would enhance the assessment of health technologies that require learning.
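
To make the two commonest approaches found in the review concrete, the sketch below illustrates a chronological split-group comparison and a simple curve fit on a single operator's case series. Everything here is hypothetical: the simulated operative times, the power-law model, and all variable names are illustrative assumptions, not data or methods from the reviewed studies.

```python
# Illustrative sketch only: simulated data, not drawn from any reviewed study.
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical case series: operative time (minutes) for 60 consecutive cases
# by one operator, with times falling as experience accumulates.
case_number = np.arange(1, 61)
operative_time = 120 * case_number ** -0.15 + rng.normal(0, 8, size=60)

# (1) Split-group analysis: divide the series chronologically into halves
#     and compare mean operative time with a two-sample t test.
early, late = operative_time[:30], operative_time[30:]
t_stat, p_value = stats.ttest_ind(early, late)
print(f"split-group t test: t = {t_stat:.2f}, p = {p_value:.3f}")

# (2) Curve fitting: an assumed power-law learning curve, time = a * n^(-b),
#     fitted by nonlinear least squares.
def power_law(n, a, b):
    return a * n ** (-b)

(a_hat, b_hat), _ = curve_fit(power_law, case_number, operative_time, p0=(120.0, 0.1))
print(f"fitted learning curve: time ~ {a_hat:.1f} * n^(-{b_hat:.3f})")
```

The split-group t test only detects whether later cases differ from earlier ones on average, whereas the fitted curve describes the rate of improvement; the contrast is one reason the review favors more informative statistical methods over simple chronological splits.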