Abstract-Executing a dataflow program on a parallel platform requires assigning a finite size to each buffer so that the program executes correctly without introducing any deadlock. Furthermore, in the case of dynamic dataflow programs, different buffer size assignments lead to significant differences in throughput; hence a more appropriate optimization problem is to choose the buffer sizes so that throughput is maximized and resource usage is minimized. This paper introduces a new heuristic methodology for the buffer dimensioning of dynamic dataflow programs, considered as one stage of a more general design space exploration process.
I. INTRODUCTION

The latest generation of massively parallel many/multi-core processing platforms has brought renewed interest in dataflow programming approaches, which attempt to provide more natural methodologies for efficiently exploiting the available parallelism. The evolution of processing platforms towards concurrent systems composed of homogeneous or heterogeneous arrays of processors providing massive parallelism has been essentially triggered by the limitations of switching frequency and power dissipation in deep submicron CMOS technology. In the meantime, the common practice of software development still relies on sequential approaches and ad-hoc transformations into concurrent, non-portable software versions.

In principle, a dataflow program is defined as a directed graph in which each node represents a computational kernel, called an actor, and each edge a first-in first-out (FIFO) lossless interconnection channel, often implemented by a memory buffer. The processing performed by the actors is encapsulated in atomic executions (firings) called actions. Communication between actors is permitted only through the exchange of atomic data packets, called tokens, by means of interconnection channels implemented by buffers which, in the abstract description of the program, have infinite size. Figure 1 illustrates the structure of a simple dataflow program.

A dataflow program can be seen as a high-level description or specification of a processing algorithm that abstracts from the details of its actual execution on a processing platform. The program is the starting point of the stages of a design flow that generates specific hardware and/or software implementations by removing the abstractions and adding design settings according to the specific constraints of the platform and the optimization objectives of the design. Hence, a precious feature of a dataflow program is essentially the portability
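The structure just described (actors exchanging tokens over FIFO channels of finite capacity, with actions that fire only when their input tokens and output space are available) can be summarized by the minimal sketch below. This is only an illustrative sketch in Python: the names Channel, Producer, Consumer and fire are hypothetical and do not come from the paper or from any specific dataflow framework. It shows where a buffer size assignment enters the execution: the capacity chosen for each channel determines whether firings remain possible.

```python
from collections import deque

# Bounded FIFO channel; in the abstract program model the capacity would
# be infinite, while buffer dimensioning assigns it a finite value.
class Channel:
    def __init__(self, capacity):
        self.capacity = capacity
        self.tokens = deque()

    def can_write(self, n=1):
        return len(self.tokens) + n <= self.capacity

    def can_read(self, n=1):
        return len(self.tokens) >= n

    def write(self, token):
        self.tokens.append(token)

    def read(self):
        return self.tokens.popleft()

class Producer:
    """Actor whose action fires only if output space is available."""
    def __init__(self, out_ch, count):
        self.out, self.remaining = out_ch, count

    def fire(self):
        if self.remaining > 0 and self.out.can_write():
            self.out.write(self.remaining)   # emit one token per firing
            self.remaining -= 1
            return True
        return False

class Consumer:
    """Actor whose action fires only if an input token is present."""
    def __init__(self, in_ch):
        self.inp, self.received = in_ch, []

    def fire(self):
        if self.inp.can_read():
            self.received.append(self.inp.read())
            return True
        return False

# Keep firing actors until none can fire; with an unsuitable set of
# channel capacities this stopping condition is what a deadlock looks like.
ch = Channel(capacity=4)
actors = [Producer(ch, count=10), Consumer(ch)]
while any(a.fire() for a in actors):
    pass
```

In this toy example any positive capacity suffices; in a dynamic dataflow program with feedback and data-dependent firing rules, the chosen capacities decide both whether the execution can complete and how much parallelism, and therefore throughput, is achievable.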