MapReduce has recently become a popular distributed programming framework for data-intensive applications. However, a large-scale MapReduce cluster inevitably suffers machine/node failures and consumes considerable energy. To mitigate these problems, MapReduce employs several policies for replicating input data, storing/replicating intermediate data, and re-executing failed tasks. In this study, we focus on two typical policies for storing/replicating intermediate data and derive the job completion reliability (JCR) and job energy consumption (JEC) of a MapReduce cluster when each policy is individually employed. We further analyze and compare the two policies under various scenarios in which jobs with different input data sizes, numbers of reduce tasks, and other parameters run on a MapReduce cluster with two extreme parallel-execution capabilities. The analytical results enable MapReduce cluster managers to understand how the two policies influence the JCR and JEC of a MapReduce cluster.