Outdoor urban scenes typically contain many planar surfaces, which are useful in tasks such as scene reconstruction, object recognition, and navigation. Planar constraints are especially valuable when only a single image is available, but the lack of 3D information makes the planes difficult to find; nevertheless, a number of cues, such as rectangular shapes, edges, and appearance, make this possible. We develop a method that determines whether regions in an image are planar and estimates their orientation; motivated by how humans draw on prior knowledge to interpret new scenes, the method learns from a training set of examples. In contrast to previous approaches, which often rely on rectangular structure, ours generalises to a variety of outdoor environments without restrictive assumptions such as a Manhattan-like world or a camera aligned with the ground plane. From a single image, our method reliably distinguishes planes from non-planes and estimates their orientation accurately; it is fast and efficient, designed with real-time application in mind.