Abstract-Cloud storage provides users with abundant storage space and convenient, immediate access to data, which is the foundation of many kinds of cloud applications. However, there has been little in-depth study of how to optimize cloud storage to improve data access performance. With the development of storage and computing technology, digital data occupies ever more space. According to statistics, 60% of this digital data is redundant, and traditional data compression can eliminate only intra-file redundancy. The growth of redundant data will continue unabated; the issue is how to manage it while operating under the assumption that this growth will likely accelerate. Data de-duplication has been proposed to address these problems. Many organizations set up private clouds to make the best use of their resources; an organization can build private cloud storage from its unused resources to store its own data. Since private cloud storage has a limited amount of hardware resources, the available space must be used optimally to accommodate the maximum amount of data. Data de-duplication is an effective technique for optimizing the utilization of backup storage space by avoiding redundancy. In this paper, we discuss the flaws in existing de-duplication methods and introduce a new method for data de-duplication. Our proposed method, Intensive Indexing (I2D) de-duplication, is an enhanced file-level de-duplication scheme that provides dynamic space optimization in private cloud storage backup while increasing throughput and de-duplication efficiency.
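For readers unfamiliar with the baseline technique, the following is a minimal sketch of plain file-level de-duplication, the approach that I2D enhances: each file is hashed, and only the first copy of each unique content hash is stored, while later copies are recorded as references. This is an illustrative assumption of how a simple hash index might look, not the I2D method itself; the function and parameter names are hypothetical.

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Compute a SHA-256 digest of a file's contents, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

def deduplicate(paths, index=None):
    """
    File-level de-duplication: keep one stored copy per unique content hash.
    `index` maps content digest -> path of the stored copy.
    Returns the updated index and the list of paths found to be duplicates.
    """
    index = {} if index is None else index
    duplicates = []
    for path in paths:
        digest = file_digest(path)
        if digest in index:
            duplicates.append(path)   # redundant copy; store only a reference
        else:
            index[digest] = path      # first occurrence; store the file itself
    return index, duplicates
```

Because this baseline detects redundancy only between whole files with identical content, files that differ by a single byte are stored in full, which motivates the indexing improvements discussed in the rest of the paper.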