This paper explores the use of optimistic computation to improve application performance in wide-area distributed environments. We do so by defining a parametric model of optimistic computation and then running sets of parameterized experiments to show where, and to what degree, optimistic computation can produce speed-ups. The model is instantiated as an optimistic workload generator implemented as a parallel MPI code. The experiment sets are run using this code on an EmuLab system where the network topology, bandwidth, and latency can be experimentally controlled. Hence, the results we obtain come from a real parallel code running over a real network protocol under emulated network conditions. We show that under favorable conditions many-fold speed-ups are possible, and that even under moderate conditions speed-ups can still be realized. While optimism generally provides the best speed-ups when network latency dominates the processing cycle, we have observed cases (with a 90% probability of success) in which latency is only one sixth of the processing cycle, yet optimism still achieves break-even relative performance and 85% of "local" performance. The ultimate goal is to apply this understanding to real-world grid applications that can use optimism to tolerate higher latencies.
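To make the idea concrete, the sketch below illustrates the general pattern an optimistic MPI workload generator of this kind might follow; it is not the authors' generator, and the parameters (CYCLE_US, LATENCY_US, P_SUCCESS) and two-rank structure are illustrative assumptions. One rank issues a remote request, continues computing optimistically on a guessed result while the reply is in flight, and redoes the cycle whenever the guess is treated as wrong.

```c
/* Hypothetical sketch of an optimistic workload generator: one rank
 * overlaps its processing cycle with a remote request and rolls the
 * cycle back when its optimistic guess fails. Parameter names and
 * values are assumptions for illustration only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CYCLES     100     /* processing cycles to run                  */
#define CYCLE_US   6000    /* simulated compute time per cycle (us)     */
#define LATENCY_US 1000    /* simulated one-way network latency (us)    */
#define P_SUCCESS  0.90    /* probability an optimistic guess is usable */

/* Stand-in for one processing cycle's worth of work. */
static void do_work(int input) { (void)input; usleep(CYCLE_US); }

int main(int argc, char **argv) {
    int rank, size, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { MPI_Finalize(); return 1; }
    srand(rank + 1);

    if (rank == 1) {
        /* "Remote" peer: answer each request after a simulated latency. */
        for (int i = 0; i < CYCLES; i++) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            usleep(LATENCY_US);
            value += 1;                        /* the "real" remote result */
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
    } else if (rank == 0) {
        int redone = 0;
        double t0 = MPI_Wtime();
        for (int i = 0; i < CYCLES; i++) {
            MPI_Request req;
            int remote, guess = value + 1;     /* optimistic prediction    */

            /* Issue the remote request, but do not wait for the reply... */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Irecv(&remote, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);

            /* ...instead, compute optimistically on the guessed input.   */
            do_work(guess);
            MPI_Wait(&req, MPI_STATUS_IGNORE);

            /* Misprediction is drawn from the model parameter P_SUCCESS;
             * a failed guess means the cycle is redone with the actual
             * remote value, which is the cost of being optimistic.       */
            if ((double)rand() / RAND_MAX > P_SUCCESS) {
                do_work(remote);
                redone++;
            }
            value = remote;
        }
        printf("elapsed %.3f s, %d of %d cycles redone\n",
               MPI_Wtime() - t0, redone, CYCLES);
    }

    MPI_Finalize();
    return 0;
}
```

With these assumed parameters the sketch mirrors the trade-off the experiments probe: when latency is small relative to the cycle time and guesses usually succeed, overlapping the request with optimistic work approaches "local" performance, while frequent mispredictions erase the gain.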