Large-scale reservoir simulation is essential to understand various flow processes inside the reservoir. With the advent of high performance computing (HPC), it is now possible to simulate models with more than one billion cells. Because of their cost effectiveness, Linux clusters are very popular for large-scale reservoir simulations. Many large clusters have been built by connecting processors via state-of-the-art high-speed networks such as InfiniBand (IB), which can add considerable hardware cost. Multiple computer clusters can be connected to build a simulation grid for simulating giant models that may be difficult to run on a single cluster because of size limitations. The network connecting the clusters should be capable of supporting the required data transfer rate to avoid performance degradation. Communication and computational load in a simulation depend on various parameters of the model, including size, underlying physics, and mathematical formulation. Our focus in this study is to examine HPC architectures that may be used for large-scale reservoir simulation in a cost-effective manner. Our design was made following extensive benchmark studies. We designed a blocking network configuration to reduce the cost of the hardware associated with the network. Our tests with large models indicate that the new design causes only minor degradation in performance while yielding significant savings in hardware cost. We followed the Hyperscale design principle in constructing the cluster, eliminating all unnecessary components and keeping only essential ones. In this paper, we present results from our simulation grid and discuss the scalability of such simulations. We also discuss the benefits of a simulation grid in increasing the utilization of hardware resources.
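
To make the cost trade-off of a blocking network concrete, the following minimal sketch estimates the hardware saved by oversubscribing the uplinks of a two-level fat-tree relative to a non-blocking design. It is purely illustrative: the 36-port switch size, the oversubscription ratios, and the `leaf_uplinks` helper are assumptions for this example, not figures or methods taken from the paper.

```python
# Illustrative sketch: port budget of a blocking (oversubscribed) two-level
# fat-tree versus a non-blocking one. All parameters are hypothetical.

def leaf_uplinks(ports_per_switch: int, oversubscription: float) -> tuple[int, int]:
    """Split a leaf switch's ports into downlinks (to compute nodes) and
    uplinks (to spine switches) for a given oversubscription ratio.

    oversubscription = downlink bandwidth / uplink bandwidth,
    e.g. 1.0 for non-blocking, 2.0 for 2:1 blocking.
    """
    down = round(ports_per_switch * oversubscription / (oversubscription + 1))
    up = ports_per_switch - down
    return down, up

ports = 36  # hypothetical 36-port IB leaf switch
nonblocking_up = ports // 2  # a non-blocking leaf uses half its ports as uplinks
for ratio in (1.0, 2.0, 3.0):
    down, up = leaf_uplinks(ports, ratio)
    print(f"{ratio:.0f}:1 blocking  downlinks={down}  uplinks={up}  "
          f"spine ports and cables saved per leaf: {nonblocking_up - up}")
```

Each uplink removed eliminates a spine-switch port and a cable, which is where the hardware savings in a blocking design come from; the price is potential congestion when a model's communication load approaches the reduced bisection bandwidth, which is why the abstract's benchmark-driven sizing matters.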