Hair modeling plays an important role in computer animation, virtual reality, and other applications. This paper proposes an encoder-decoder network, named HAO-CNN, to recover 3D hair strand models from a single image. Specifically, HAO-CNN generates a volumetric vector field (VVF) from the 2D orientation map of a hairstyle. Instead of working directly on full-resolution VVFs, we introduce an adapted O-CNN to predict an octree-based adaptive representation of VVFs, which greatly reduces the memory cost. In addition, we fuse features from different layers of the encoding stage so that the network both captures the global hair structure and remains aware of individual hair filaments. Because true 3D hair models are difficult to acquire, we augment a dataset of 340 3D hair models to 1,800 models via interactive editing in modeling software and render their orientation maps as training data. Given a photograph of a human head, we segment out the hair region, compute its 2D orientation map with Gabor filters, and feed the map into the network to produce a hair VVF, which is then converted into strand models by an improved VVF-to-strands algorithm. This greatly decreases the time cost compared with previous approaches based on volumetric vector fields.
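To make the preprocessing and strand-synthesis steps concrete, the following is a minimal Python sketch assuming an OpenCV/NumPy environment. The function names (`orientation_map`, `trace_strand`), the Gabor filter parameters, and the nearest-neighbor field sampling are illustrative assumptions, not the paper's implementation; in particular, the improved VVF-to-strands algorithm is not reproduced here, only a basic field-integration variant.

```python
import cv2
import numpy as np

def orientation_map(gray, mask, num_orients=32, ksize=17,
                    sigma=1.8, lambd=4.0, gamma=0.75):
    """Per-pixel dominant orientation from a bank of Gabor filters.

    gray: float32 grayscale image in [0, 1]; mask: binary hair-region mask.
    Returns an angle map in [0, pi), zeroed outside the hair region.
    (All filter parameters here are illustrative, not the paper's values.)
    """
    responses = []
    for i in range(num_orients):
        theta = np.pi * i / num_orients
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        responses.append(np.abs(cv2.filter2D(gray, cv2.CV_32F, kern)))
    responses = np.stack(responses, axis=0)        # (num_orients, H, W)
    best = np.argmax(responses, axis=0)            # strongest filter per pixel
    angles = np.pi * best.astype(np.float32) / num_orients
    return np.where(mask > 0, angles, 0.0)

def trace_strand(vvf, root, step=0.5, max_len=300):
    """Grow one strand from a scalp root by integrating the vector field.

    vvf: (D, H, W, 3) volumetric vector field of growth directions.
    root: starting voxel position (z, y, x). Uses simple nearest-neighbor
    sampling; a practical tracer would interpolate the field.
    """
    pts = [np.asarray(root, dtype=np.float32)]
    for _ in range(max_len):
        z, y, x = np.round(pts[-1]).astype(int)
        if not (0 <= z < vvf.shape[0] and
                0 <= y < vvf.shape[1] and
                0 <= x < vvf.shape[2]):
            break                                  # left the volume
        d = vvf[z, y, x]
        n = np.linalg.norm(d)
        if n < 1e-4:                               # outside the hair region
            break
        pts.append(pts[-1] + step * d / n)
    return np.stack(pts)
```

A full pipeline in this spirit would call `orientation_map` on the segmented hair region, feed the result to the network to predict the VVF, and then call `trace_strand` once per sampled scalp root to produce the strand set.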