If an edge-node orchestrator can partition Big Data tasks of variable computational complexity between edge and cloud resources, major reductions in total task completion time can be achieved even at low Wide Area Network (WAN) speeds. The percentage time savings grow with increasing task computational complexity, while low-complexity tasks require higher WAN speeds to benefit. We demonstrate through numerical simulations that low-complexity tasks can still benefit from task partitioning between an edge node and multiple cloud servers. The orchestrator can achieve even greater time savings by rerouting Big Data tasks directly to a single cloud resource when the balance of parameters (WAN speed and the ratio of edge to cloud processing speeds) is favourable.
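To make the trade-off concrete, the following is a minimal sketch (not the paper's simulation code) of the orchestrator's decision under a simple linear cost model: a task of D bytes with computational complexity c operations per byte is either processed at the edge, rerouted entirely to a single cloud server, or partitioned between the two, and the fastest plan wins. All parameter names (D, c, s_edge, s_cloud, wan) and the single-cloud-server partition model are illustrative assumptions.

def edge_only(D, c, s_edge):
    """Completion time when the whole task runs on the edge node."""
    return c * D / s_edge

def cloud_only(D, c, s_cloud, wan):
    """Completion time when the task is rerouted to a single cloud server:
    WAN transfer of the data followed by cloud processing."""
    return D / wan + c * D / s_cloud

def partitioned(D, c, s_edge, s_cloud, wan, f):
    """Completion time when a fraction f of the data is offloaded to the
    cloud while the remainder is processed at the edge, in parallel."""
    t_edge = (1 - f) * c * D / s_edge
    t_cloud = f * D / wan + f * c * D / s_cloud
    return max(t_edge, t_cloud)

def best_plan(D, c, s_edge, s_cloud, wan, steps=100):
    """Return the fastest of edge-only, cloud-only, and the best partition
    found by a coarse grid search over the offloaded fraction f."""
    candidates = {
        "edge only": edge_only(D, c, s_edge),
        "cloud only": cloud_only(D, c, s_cloud, wan),
    }
    f_best = min(range(steps + 1),
                 key=lambda i: partitioned(D, c, s_edge, s_cloud, wan, i / steps))
    candidates[f"partition (f={f_best / steps:.2f})"] = partitioned(
        D, c, s_edge, s_cloud, wan, f_best / steps)
    return min(candidates.items(), key=lambda kv: kv[1])

# Example (assumed figures, for illustration only): a 1 GB task at
# 500 ops/byte, edge at 1 Gop/s, cloud at 10 Gop/s, WAN at 12.5 MB/s.
print(best_plan(D=1e9, c=500, s_edge=1e9, s_cloud=1e10, wan=1.25e7))

In this toy model, raising c (task complexity) or the cloud-to-edge speed ratio favours partitioning or full offloading, while lowering the WAN speed pushes the optimum back toward edge-only execution, which mirrors the qualitative behaviour described above.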