Tremendous data volumes generated by big data applications are starting to overwhelm datacenters and networks. Traditional research efforts have focused on how to process these vast volumes of data inside datacenters. Nevertheless, little attention has been paid to the increase in power consumption that results from transferring these gigantic volumes of data from the source to the destination (the datacenters). An efficient approach to this challenge is to progressively process large volumes of data as close to the source as possible and transport only the reduced volume of extracted knowledge to the destination. In this article, we examine the impact of progressively processing different big data volumes, from source to datacenters, on network power consumption. Accordingly, a noteworthy decrease in the volume of data transferred is achieved, which results in a substantial reduction in network power consumption. We consider different volumes of big data chunks and introduce a Mixed Integer Linear Programming (MILP) model to optimize the processing locations of these data volumes and the locations of two datacenters. The results show that serving big data volumes drawn from a uniform distribution yields higher power savings than serving chunks of fixed size: we obtain average network power savings of 57%, 48%, and 35% for chunk volumes of 10-220 Gb (uniform), 110 Gb, and 50 Gb, respectively, compared to the conventional approach in which all chunks are processed inside the datacenters only.
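To illustrate the intuition behind progressive processing, the sketch below enumerates candidate processing locations along a simple source-to-datacenter path and picks the one that minimizes total network power. This is a deliberately simplified brute-force illustration, not the paper's MILP model: the node list, per-hop energy per bit, processing energy per bit, and knowledge-reduction ratio are all hypothetical values chosen for demonstration.

```python
# Illustrative sketch (NOT the paper's MILP): choose where to process a
# chunk on a linear path src -> edge -> core -> dc so that total network
# power (transport + processing) is minimized. All parameters are assumed.
NODES = ["src", "edge", "core", "dc"]   # hypothetical linear path
HOP_ENERGY = 2.0                        # assumed nJ per bit per hop
PROC_ENERGY = 0.5                       # assumed nJ per bit processed
REDUCTION = 0.1                         # extracted knowledge = 10% of raw volume

def total_power(chunk_bits, proc_node):
    """Power to process a chunk at proc_node and carry the result to the dc."""
    power, volume = 0.0, float(chunk_bits)
    for hop, node in enumerate(NODES):
        if node == proc_node:           # process here: pay processing energy,
            power += PROC_ENERGY * volume
            volume *= REDUCTION         # then only the knowledge travels on
        if hop < len(NODES) - 1:        # transport current volume one hop
            power += HOP_ENERGY * volume
    return power

def best_location(chunk_bits):
    """Exhaustively pick the minimum-power processing location."""
    return min(NODES, key=lambda n: total_power(chunk_bits, n))
```

Under these assumed parameters, processing at the source costs 0.5V for processing plus three hops carrying only 0.1V each, which is far cheaper than shipping the full volume V across all three hops to the datacenter first; `best_location` therefore selects `"src"`, mirroring the article's argument for processing close to the source.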