Automatic segmentation of lung tissue in thoracic CT scans is useful for diagnosis and treatment planning of pulmonary diseases. Unlike healthy lung tissue, which is easily identifiable in CT, diseased lung parenchyma is difficult to segment automatically due to its higher attenuation, inhomogeneous appearance, and inconsistent texture. We overcome these challenges through a multi-layer machine learning approach that exploits geometric structures both within and outside the lung (e.g., ribs, spine). In the coarsest layer, stable landmarks on the lung surface are detected by a hierarchical detection network (HDN) trained on hundreds of annotated CT volumes. These landmarks are used to robustly initialize a coarse statistical model of the lung shape. Subsequently, a region-dependent boundary refinement uses a discriminative appearance classifier to refine the surface, and finally a region-driven level set refinement extracts the fine-scale detail. Through this approach, we demonstrate robustness to a variety of lung pathologies.
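
To make the final refinement stage concrete, the minimal sketch below applies a generic region-driven active contour (scikit-image's morphological Chan-Vese) to a synthetic 2-D slice, starting from a rough initialization in the spirit of a fitted shape model. It is a stand-in under stated assumptions, not the paper's implementation: the HDN landmark detection, statistical shape model, and discriminative boundary classifier are omitted, and the specific level-set formulation used in the paper may differ.

```python
# A minimal sketch of a region-driven contour refinement step only.
# Assumptions: scikit-image's morphological Chan-Vese stands in for the
# paper's level-set formulation; the synthetic ellipse stands in for a
# lung region; the disk initialization stands in for the coarse shape model.
import numpy as np
from skimage.segmentation import morphological_chan_vese, disk_level_set

# Synthetic "slice": a bright ellipse on a noisy background.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
slice_2d = ((((yy - 64) / 40.0) ** 2 + ((xx - 64) / 28.0) ** 2) < 1.0).astype(float)
slice_2d += 0.3 * rng.standard_normal(slice_2d.shape)

# Initialize near the target, as a refined statistical shape model would,
# then let the region-driven contour recover the fine-scale boundary.
init = disk_level_set(slice_2d.shape, center=(64, 64), radius=20)
mask = morphological_chan_vese(slice_2d, 60, init_level_set=init, smoothing=2)

print("segmented pixels:", int(mask.sum()))
```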