We consider the problem of classification of functional data into two groups by linear classifiers based on one-dimensional projections of functions. We reformulate the task of finding the best classifier as an optimization problem and solve it by regularization techniques, namely the conjugate gradient method with early stopping, the principal component method and the ridge method. We study the empirical version with finite training samples consisting of incomplete functions observed on different subsets of the domain and show that the optimal, possibly zero, misclassification probability can be achieved in the limit along a possibly nonconvergent empirical regularization path. Being able to work with fragmentary training data, we propose a domain extension and selection procedure that finds the best domain beyond the common observation domain of all curves. In a simulation study we compare the different regularization methods and investigate the performance of domain selection. Our methodology is illustrated on a medical data set, where we observe a substantial improvement of classification accuracy due to domain extension.

We consider the problem of classification of a functional observation into one of two groups. Classification of functional data is a rich, long-standing topic comprehensively overviewed in Baíllo et al. (2011b). It was recently shown by Delaigle and Hall (2012a) that, depending on the relative geometric position of the difference of the group means, representing the signal, and the covariance operator, summarizing the structure of the noise, certain classifiers can have zero misclassification probability. This remarkable phenomenon, called perfect classification, is a special property of the infinite-dimensional setting and cannot occur in the multivariate context, except in degenerate cases. It was demonstrated by Delaigle and Hall (2012a) that a particularly simple class of linear classifiers, based on a carefully chosen one-dimensional projection of the function to