MapReduce is an important distributed processing model for large-scale data-intensive applications. As an open-source implementation of MapReduce, Hadoop provides enterprises with a cost-efficient solution for their analytics needs. However, the default HDFS block placement policy assumes that the computing nodes in a cluster are homogeneous and tries to balance load by placing blocks randomly, which limits the system's ability to adapt to heterogeneous environments. In this paper, we propose a partition-based hierarchical architecture for Hadoop. On top of this architecture, we propose an intelligent block placement strategy that takes each node's resource characteristics, e.g., disk space utilization and computing capacity, into account. The strategy includes three mechanisms that guarantee the reliability and scalability of Hadoop. Experimental results show that the proposed strategies not only balance load across the whole cluster and minimize the total data moved, but also significantly improve MapReduce performance.
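To make the placement idea concrete, below is a minimal sketch of how a heterogeneity-aware policy might score candidate nodes by weighting free disk space against computing capacity. This is not the paper's implementation: the class and field names, the weights, and the linear scoring formula are all illustrative assumptions.

```java
import java.util.Comparator;
import java.util.List;

/**
 * Hypothetical sketch of heterogeneity-aware block placement.
 * All names, weights, and the scoring formula are illustrative
 * assumptions, not the strategy proposed in the paper.
 */
public class PlacementSketch {

    /** Per-node resource snapshot (illustrative fields only). */
    record NodeStats(String host, double diskUtilization, double computeCapacity) {}

    /**
     * Score a node: prefer nodes with low disk utilization and high
     * normalized computing capacity. The weights are assumed values.
     */
    static double score(NodeStats n, double wDisk, double wCompute) {
        return wDisk * (1.0 - n.diskUtilization()) + wCompute * n.computeCapacity();
    }

    /** Pick the highest-scoring candidate for the next block replica. */
    static NodeStats chooseTarget(List<NodeStats> candidates) {
        return candidates.stream()
                .max(Comparator.comparingDouble(n -> score(n, 0.5, 0.5)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<NodeStats> nodes = List.of(
                new NodeStats("node-a", 0.85, 0.40),  // nearly full, slow
                new NodeStats("node-b", 0.30, 0.90),  // roomy and fast
                new NodeStats("node-c", 0.50, 0.60));
        System.out.println("place block on: " + chooseTarget(nodes).host());
    }
}
```

In this sketch, a node that is nearly full or has low computing capacity is penalized, so new blocks drift toward roomy, fast nodes rather than being placed uniformly at random as in the default HDFS policy.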