Some service providers offer services for which physical devices, terminals, and servers are installed at the customer's site, e.g. in a hospital setting where terminals installed in patient rooms are used for communication. These devices work together to provide the service, forming service workflows. Due to a recent trend in which parts of the management system for such services are offloaded to cloud environments, these services can no longer be isolated in a private subnet that is specifically dimensioned for their purpose. Part of the network flows must then pass over a larger section of the customer network, and the network load can increase as new services are added or existing services are upgraded. It is therefore important to determine, before a new service workflow is deployed or an existing service is upgraded, whether the customer network has sufficient network and server capacity, so that a sufficient level of quality can be guaranteed.

In this article we focus on how the impact of service workflows can be determined, ensuring that service workflows do not negatively affect each other's execution. In particular, we present an impact analysis strategy to evaluate the degree to which a given set of service workflows can be guaranteed in a given network topology. As not all flows are continuously active, the approach is designed to support the sharing of network and server resources using a hierarchically specified resource sharing model. We evaluate the quality of the resulting solutions using two use cases, as well as the execution speed of the designed algorithms. For the two evaluation use cases, we find that the developed hierarchical algorithm requires approximately 42% and 52% fewer resources than an approach without resource sharing, without any workflow failures occurring during the simulations.
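The article does not specify the resource sharing model at this point, but the core idea of a hierarchical admission check can be illustrated with a minimal sketch. In this hypothetical example, each network or server resource is a node in a tree (e.g. a ward switch below a shared customer uplink), and a workflow's demand is admitted only if it fits at the requested node and at every ancestor, so that children may be over-provisioned individually while their combined active load stays within each shared capacity. All names and capacities below are illustrative assumptions, not values from the article.

```python
class ResourceNode:
    """A node in a hierarchical resource-sharing tree.

    Children may individually be over-provisioned, but their combined
    active usage must stay within every ancestor's capacity.
    """

    def __init__(self, name, capacity, parent=None):
        self.name = name
        self.capacity = capacity
        self.used = 0
        self.parent = parent

    def can_admit(self, demand):
        # Walk up the hierarchy: the demand must fit at this node
        # and at every ancestor.
        node = self
        while node is not None:
            if node.used + demand > node.capacity:
                return False
            node = node.parent
        return True

    def admit(self, demand):
        # Reserve the demand along the whole path to the root,
        # but only if the hierarchical check passes.
        if not self.can_admit(demand):
            return False
        node = self
        while node is not None:
            node.used += demand
            node = node.parent
        return True

    def release(self, demand):
        # Free a previously admitted demand along the path to the root.
        node = self
        while node is not None:
            node.used -= demand
            node = node.parent


# Hypothetical topology: one customer uplink shared by two hospital wards.
uplink = ResourceNode("uplink", capacity=100)
ward_a = ResourceNode("ward_a", capacity=80, parent=uplink)
ward_b = ResourceNode("ward_b", capacity=80, parent=uplink)

assert ward_a.admit(60)       # fits ward_a and the shared uplink
assert not ward_b.admit(50)   # ward_b has room, but the uplink does not
assert ward_b.admit(40)       # a smaller workflow still fits
```

The point of the walk-up check is that a local capacity test is not enough once flows leave their private subnet: a workflow that fits its own ward may still be rejected because the shared uplink is saturated by another ward's workflows.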