Classification learning is dominated by systems which induce large numbers of small axis-orthogonal decision surfaces. This strongly biases such systems towards particular hypothesis types, but there is reason to believe that many domains have underlying concepts which do not involve axis-orthogonal surfaces. Further, the multiplicity of small decision regions militates against any holistic appreciation of the theories produced by these systems, notwithstanding the fact that many of the small regions are individually comprehensible.

This thesis investigates modelling concepts as large geometric structures in n-dimensional space. Convex hulls are a superset of the axis-orthogonal hyperrectangles into which axis-orthogonal systems partition the instance space. In consequence, there is reason to believe that convex hulls might provide a more flexible and general learning bias than axis-orthogonal regions. The formation of a convex hull around a group of points of the same class is shown to be a usable generalisation, and one more general than the generalisations produced by axis-orthogonal classifiers without constructive induction, such as decision trees, decision lists and rules. The use of a small number of large hulls as a concept representation is shown to provide classification performance which can be better than that of classifiers which use a large number of small fragmentary regions for each concept.

A convex hull based classifier, CH1, has been implemented and tested. CH1 can handle categorical and continuous data. Algorithms for two basic generalisation operations on hulls, inflation and facet deletion, are presented. The two operations are shown to improve the accuracy of the classifier and to provide moderate classification accuracy over a representative selection of typical, largely or wholly continuous-valued machine learning tasks. The classifier exhibits superior performance to well-known axis-orthogonal classifiers when presented with domains where the underlying decision surfaces are not axis-parallel. The strengths and weaknesses of the system are identified. One particular advantage is the ability of the system to model domains with approximately the same number of structures as there are underlying concepts. This opens the possibility of extracting higher-level mathematical descriptions of the induced concepts, using the techniques of computational geometry, which is not possible from a multiplicity of small regions.
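The central idea, forming one convex hull per class and treating hull membership as the generalisation, can be illustrated with a minimal sketch. The sketch below is not CH1 itself: it handles only continuous attributes, omits the inflation and facet-deletion operations, uses SciPy's Delaunay triangulation for the point-in-hull test, and the class name ConvexHullClassifier and its fallback rule for uncovered points are illustrative assumptions only.

```python
# Illustrative sketch of per-class convex-hull classification (not the CH1 algorithm).
# Assumes numeric features and that each class has at least d+1 points in general
# position, which scipy.spatial.Delaunay requires to build a hull.
import numpy as np
from scipy.spatial import Delaunay

class ConvexHullClassifier:
    """One convex hull per class; a query point is assigned to the first class
    whose hull contains it. Points outside every hull fall back to the class of
    the nearest training point (a simplification, not CH1's behaviour)."""

    def fit(self, X, y):
        self.X_ = np.asarray(X, dtype=float)
        self.y_ = np.asarray(y)
        # Triangulate each class's points; the union of simplices is the class hull.
        self.hulls_ = {c: Delaunay(self.X_[self.y_ == c])
                       for c in np.unique(self.y_)}
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        preds = []
        for x in X:
            label = None
            for c, tri in self.hulls_.items():
                # find_simplex returns -1 for points outside the triangulated hull.
                if tri.find_simplex(x) >= 0:
                    label = c
                    break
            if label is None:
                # Uncovered point: nearest-neighbour fallback.
                label = self.y_[np.argmin(np.linalg.norm(self.X_ - x, axis=1))]
            preds.append(label)
        return np.array(preds)
```

Even this crude version shows the representational contrast drawn above: each concept is a single geometric object whose facets are oblique decision surfaces, rather than a collection of many small axis-parallel regions.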