Load balancing has become a critical function in cloud storage systems, which consist of complex, heterogeneous networks of nodes with different capacities. However, the convergence rate and performance of any load-balancing algorithm deteriorate as the number of nodes in the system, the diameter of the network, and the communication overhead increase. Therefore, this paper presents an approach that aims to scale the system out rather than up; in other words, it allows the system to be expanded by adding more nodes, without increasing the power of each node, while at the same time improving the overall performance of the system. Our proposal also improves performance not only by considering the parameters that affect the algorithm's behavior but also by simplifying the structure of the network that executes it. The proposal was evaluated through mathematical analysis and computer simulations, and it was compared with the centralized approach and the original diffusion technique. The results show that our solution outperforms both in terms of throughput and response time. Finally, we prove that our proposal converges to an equilibrium state in which the relative loads (load per unit of capacity) of all in-domain nodes are equal, because each node receives an amount of load proportional to its capacity. We therefore conclude that the approach has the advantages of being fair and simple, with no node being privileged.
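
For intuition only, the following is a minimal Python sketch of a capacity-weighted diffusion step of the kind summarized above; it is not the paper's algorithm, and the ring topology, step size, and function names are assumptions made purely for illustration. It shows how repeated local exchanges between neighbors drive the system to the equilibrium in which every node holds load proportional to its capacity.

    # Illustrative sketch of capacity-weighted diffusion load balancing.
    # The topology, step size, and names below are assumptions for this example,
    # not the paper's actual algorithm or parameters.

    def diffuse_step(loads, capacities, edges, alpha=0.3):
        """One synchronous diffusion round over undirected edges.

        Each edge moves load from the endpoint with the higher relative load
        (load / capacity) toward the other, so at equilibrium every node holds
        an amount of load proportional to its capacity."""
        new_loads = list(loads)
        for i, j in edges:
            # Positive flow means load moves from node i to node j.
            flow = alpha * (loads[i] / capacities[i] - loads[j] / capacities[j]) \
                   * min(capacities[i], capacities[j])
            new_loads[i] -= flow
            new_loads[j] += flow
        return new_loads

    if __name__ == "__main__":
        capacities = [1.0, 2.0, 4.0]        # heterogeneous node capacities
        loads = [10.0, 0.0, 4.0]            # initial (unbalanced) loads, total = 14
        edges = [(0, 1), (1, 2), (2, 0)]    # small ring-shaped domain (assumed)

        for _ in range(200):
            loads = diffuse_step(loads, capacities, edges)

        # Total load / total capacity = 14 / 7 = 2, so loads approach [2, 4, 8]:
        # each node ends up with a load proportional to its capacity.
        print([round(x, 3) for x in loads])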