Perception capabilities are fundamental to making modern interconnected systems smart and, in this context, video coding is certainly a key technology. Video coding standards have traditionally been based on monolithic specifications coupled with a fixed set of profiles, each implementing a different subset of the functionalities of the given reference standard. Standards and profiles typically offer different features and enable different trade-offs that make them suitable for different applications and scenarios. Nevertheless, keeping pace with constantly evolving scenarios and user needs, for example in terms of delivered quality or power consumption, is not simple, and has led to continuous releases and updates of standards and profiles.

Ideally, designers should be able, at design time, to select and combine coding tools and profiles to optimally match the given requirements, while guaranteeing that customized applications remain interoperable. Monolithic specifications, however, tend to hide the parallelism and the dataflow nature of video coding algorithms, which can instead be successfully exploited to obtain efficient implementations and to explore different trade-offs [8].

To address these issues of codec design and customization, designers have been called upon to conceive efficient design-time methodologies.
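To make the dataflow argument concrete, the minimal sketch below (an illustration under assumed names such as transform, quantize, and entropy_code, not the decomposition of any actual standard) models toy coding tools as dataflow actors connected by FIFO channels. Because each actor fires independently on its input tokens, successive blocks flow through the stages concurrently, which is precisely the structure a monolithic specification tends to obscure.

```python
# Illustrative sketch only: coding tools modelled as dataflow actors
# connected by FIFO channels. The stage names and the toy token
# processing are assumptions for illustration, not a standard's API.
import queue
import threading

def actor(work, inbox, outbox):
    """Run one coding tool as a dataflow actor: fire on each input token."""
    while True:
        token = inbox.get()
        if token is None:            # end-of-stream marker
            if outbox is not None:
                outbox.put(None)     # propagate termination downstream
            break
        if outbox is not None:
            outbox.put(work(token))

# Toy stand-ins for coding tools; real tools would operate on blocks/frames.
def transform(block):    return [2 * x for x in block]
def quantize(block):     return [x // 4 for x in block]
def entropy_code(block): return bytes(b & 0xFF for b in block)

# Wire the tools into a pipeline: each FIFO decouples the stages, so they
# process successive blocks in parallel instead of as one monolithic loop.
q_in, q_t, q_q, q_out = (queue.Queue() for _ in range(4))
stages = [
    threading.Thread(target=actor, args=(transform, q_in, q_t)),
    threading.Thread(target=actor, args=(quantize, q_t, q_q)),
    threading.Thread(target=actor, args=(entropy_code, q_q, q_out)),
]
for s in stages:
    s.start()

for block in ([1, 2, 3, 4], [5, 6, 7, 8]):   # feed two toy blocks
    q_in.put(block)
q_in.put(None)                                # signal end of stream

while (coded := q_out.get()) is not None:
    print(coded)
for s in stages:
    s.join()
```

In the same spirit, swapping or removing an actor reconfigures the pipeline without touching the other tools, which is the kind of design-time selection and combination of coding tools discussed above.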