Data communication in cloud-based distributed stream data analytics often involves a collection of parallel and pipelined TCP flows. As the standard TCP congestion control mechanism and its variants are designed to achieve "fairness" among competing flows and are agnostic to application-layer contexts, the bandwidth allocation among a set of TCP flows traversing bottleneck links often leads to sub-optimal application-layer performance measures, e.g., stream processing throughput or average tuple complete latency. Motivated by this, and enabled by the rapid development of software-defined networking (SDN) techniques, in this paper we re-investigate the design space of the bandwidth allocation problem and propose a cross-layer framework that utilizes instantaneous information obtained from the application layer and provides on-the-fly, dynamic bandwidth adjustment algorithms to help stream analytics applications achieve better performance at runtime. We implement a prototype cross-layer bandwidth allocation framework based on a popular open-source distributed stream processing platform, Apache Storm, together with the OpenDaylight controller, and carry out extensive experiments with real-world analytical workloads on a local cluster consisting of ten workstations interconnected by an SDN-enabled fat-tree-like testbed. The experimental results clearly validate the effectiveness and efficiency of our proposed framework and algorithms. Finally, we leverage the proposed cross-layer SDN framework to introduce an exemplary mechanism for bandwidth sharing and performance reasoning among multiple active applications, and show a point solution for approximating application-level fairness.

Optimizing network activity is important for delivering real-time responses in these applications. In this context, there has been a flurry of research attempts toward optimizing streaming applications.
While in large part successful, however, these efforts have mainly centered on scheduling and provisioning the computation resources of the applications, or have been limited to minimizing traffic across the network. Hence, these solutions have largely overlooked the allocation and provisioning of network bandwidth. As a result, they are either suboptimal in optimizing network transfers [9], [11], [12], or assume a network with sufficient bandwidth resources [13]. In current stream processing frameworks, the sharing of network bandwidth is left to the mercy of the underlying transport mechanisms (e.g., TCP [14], DCTCP [15]). Nonetheless, such mechanisms are designed mainly for end-to-end data delivery in an application-agnostic manner,