With the rapid growth of the Web, distributed web crawlers were introduced to fetch massive numbers of web pages. However, traditional distributed crawlers balance load poorly across nodes, and the number of pages fetched does not scale linearly as crawling nodes are added. This paper proposes a distributed web crawler model that runs on the Hadoop platform. The characteristics of Hadoop guarantee the scalability of the proposed crawler model, while HBase provides storage for the massive volume of crawled page content. This paper also proposes a load-balancing method based on feedback from the crawling nodes. The crawler model is shown to achieve good load balancing and to scale well as crawling nodes are added.
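The abstract does not describe the feedback mechanism in detail; the following is a minimal, hypothetical sketch (not the paper's implementation) of how a master might use periodic throughput reports from crawling nodes to apportion the next batch of URLs. The class and method names (FeedbackBalancer, reportFeedback, assign) are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch of feedback-based load balancing: each crawling node
 * periodically reports how many pages it fetched in the last interval, and
 * the master distributes the next batch of URLs in proportion to that
 * reported throughput.
 */
public class FeedbackBalancer {

    // Latest throughput (pages fetched per interval) reported by each node.
    private final Map<String, Integer> reportedThroughput = new HashMap<>();

    /** Called when a node sends its periodic feedback message. */
    public void reportFeedback(String nodeId, int pagesFetched) {
        reportedThroughput.put(nodeId, Math.max(pagesFetched, 1)); // avoid zero weight
    }

    /** Splits a batch of URLs among nodes in proportion to reported throughput. */
    public Map<String, List<String>> assign(List<String> urls) {
        int total = reportedThroughput.values().stream().mapToInt(Integer::intValue).sum();
        Map<String, List<String>> plan = new HashMap<>();
        int cursor = 0;
        for (Map.Entry<String, Integer> e : reportedThroughput.entrySet()) {
            int share = (int) Math.round((double) e.getValue() / total * urls.size());
            int end = Math.min(cursor + share, urls.size());
            plan.put(e.getKey(), new ArrayList<>(urls.subList(cursor, end)));
            cursor = end;
        }
        // Any URLs left over from rounding go to the node with the highest throughput.
        if (cursor < urls.size()) {
            String fastest = reportedThroughput.entrySet().stream()
                    .max(Map.Entry.comparingByValue()).get().getKey();
            plan.get(fastest).addAll(urls.subList(cursor, urls.size()));
        }
        return plan;
    }

    public static void main(String[] args) {
        FeedbackBalancer balancer = new FeedbackBalancer();
        balancer.reportFeedback("node-1", 400);   // fast node
        balancer.reportFeedback("node-2", 100);   // slow node
        List<String> batch = new ArrayList<>();
        for (int i = 0; i < 10; i++) batch.add("http://example.com/page" + i);
        System.out.println(balancer.assign(batch)); // node-1 gets ~8 URLs, node-2 ~2
    }
}
```

Under this assumed scheme, faster nodes automatically receive larger URL batches, which is one simple way the reported feedback could counteract the uneven load the abstract attributes to traditional distributed crawlers.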