Scientific workflows are composed of many fine-grained computational tasks. In general, a large number of small tasks slows down workflow performance because of the scheduling overhead incurred during execution. Task clustering is an optimization technique that aggregates multiple small tasks into a larger task to reduce this scheduling overhead, thereby reducing the overall workflow makespan, i.e. the total execution time taken by the resources to complete all of the tasks. However, finding the optimal cluster number is a significant challenge, as it usually requires manual intervention by experienced researchers to define the clustering parameter. In this paper, we propose the use of reinforcement learning to tackle this problem by automating the discovery of the optimal cluster number for a submitted workflow. First, we model the workflow environment in which the reinforcement learning agent interacts by determining the cluster number for every round of workflow execution. Then, based on the provenance records collected after execution, the workflow environment analyzes the performance data and returns either a reward or a punishment as feedback to the reinforcement learning agent. Evaluation experiments are performed using a real-world scientific workflow (Montage in this research) to demonstrate our model's capability to identify the optimal cluster number, thus laying the groundwork for the adoption of reinforcement learning in workflow task clustering.
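The loop described above — an agent picks a cluster number, the environment executes the workflow and returns makespan-based feedback — can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual implementation: the abstract does not name the specific reinforcement learning algorithm, so the sketch uses a simple epsilon-greedy bandit over candidate cluster numbers, and `simulated_makespan` is a toy stand-in for real execution and provenance analysis.

```python
import random

# Candidate actions: cluster numbers the agent may choose (illustrative values).
CLUSTER_NUMBERS = [1, 2, 4, 8, 16, 32]

def simulated_makespan(k, n_tasks=128, task_time=1.0, overhead=5.0, workers=8):
    """Toy makespan model standing in for a real workflow run.

    Each of the k clustered jobs pays one scheduling overhead and runs its
    tasks serially; clustered jobs execute in waves across the workers.
    Too few clusters loses parallelism, too many pays excess overhead.
    """
    tasks_per_cluster = -(-n_tasks // k)   # ceiling division
    waves = -(-k // workers)               # clustered jobs run in parallel waves
    return waves * (tasks_per_cluster * task_time + overhead)

def train(episodes=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy search for the cluster number minimizing makespan."""
    rng = random.Random(seed)
    q = {k: 0.0 for k in CLUSTER_NUMBERS}      # running mean reward per action
    counts = {k: 0 for k in CLUSTER_NUMBERS}
    for _ in range(episodes):
        if rng.random() < epsilon:
            k = rng.choice(CLUSTER_NUMBERS)    # explore a random cluster number
        else:
            k = max(q, key=q.get)              # exploit the best estimate so far
        reward = -simulated_makespan(k)        # shorter makespan -> higher reward
        counts[k] += 1
        q[k] += (reward - q[k]) / counts[k]    # incremental mean update
    return max(q, key=q.get)

print("learned cluster number:", train())  # -> 8 under this toy model
```

With these toy parameters the agent settles on 8 clusters, the point where the gain from parallelism balances the per-cluster scheduling overhead; in the paper's setting the reward would instead be derived from provenance records of real Montage executions.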