Dataflow-style workflows offer a simple, high-level programming model for flexible prototyping of scientific applications, and are an attractive alternative to low-level scripting. At the same time, workflow management systems (WfMS) may support data parallelism over big datasets by providing scalable, distributed deployment and execution of the workflow over a cloud infrastructure. In theory, the combination of these properties makes workflows a natural choice for implementing Big Data processing pipelines, common for instance in bioinformatics. In practice, however, correct workflow design for parallel Big Data problems can be complex and very time-consuming. In this paper we present our experience in porting a genomics data processing pipeline from an existing scripted implementation deployed on a closed HPC cluster to a workflow-based design deployed on the Microsoft Azure public cloud. We draw two contrasting and general conclusions from this project. On the positive side, we show that our solution, based on the e-Science Central WfMS and deployed in the cloud, clearly outperforms the original HPC-based implementation, achieving up to a 2.3x speed-up. However, delivering such performance required optimising the workflow deployment model to best suit the characteristics of the cloud computing infrastructure. The main reason for the performance gains was the availability of fast, node-local SSD disks delivered by D-series Azure VMs, combined with the implicit use of local disk resources by the e-Science Central workflow engines. These conclusions suggest that, on parallel Big Data problems, it is important to couple an understanding of the cloud computing architecture and its software stack with simplicity of design, and that further efforts in automating the parallelisation of complex pipelines are required.