Over the last decade, the ICT community has witnessed the growing popularity of software networking paradigms. This trend consists in moving network applications from static, expensive hardware equipment (e.g., routers, switches, firewalls) towards flexible, inexpensive software executed on commodity servers. In this context, a server owner may provide server resources (CPUs, NICs, RAM) to customers under a Service-Level Agreement (SLA) that specifies the clients' requirements. The resource allocation problem is typically solved by overprovisioning, since the clients' applications are opaque to the server owner, and the resources they require are often unclear or very difficult to quantify. This paper presents a novel approach that exploits machine learning techniques to infer the input traffic load (i.e., the expected network traffic conditions) by solely observing the runtime CPU footprint.
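To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual method) of how such an inference could be framed as supervised regression: synthetic CPU-footprint features (placeholders for metrics such as per-core utilization or cache misses) are mapped to an offered traffic rate with a scikit-learn model. All variable names, feature choices, and the synthetic data-generating relation are assumptions for illustration only.

```python
# Hypothetical sketch: learn a mapping from runtime CPU footprint to input traffic load.
# Features, labels, and their relation below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Each row is one CPU footprint sample (e.g., per-core utilization, cache-miss rate, IPC);
# each label is the offered traffic rate (Mbps) observed when the sample was taken.
n_samples = 2000
cpu_footprint = rng.uniform(0.0, 1.0, size=(n_samples, 3))          # placeholder features
traffic_mbps = (900 * cpu_footprint[:, 0] + 80 * cpu_footprint[:, 2]
                + rng.normal(0, 10, n_samples))                     # placeholder relation

X_train, X_test, y_train, y_test = train_test_split(
    cpu_footprint, traffic_mbps, test_size=0.25, random_state=0)

# Fit a regressor that predicts traffic load from the CPU footprint alone.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE on held-out samples: {mean_absolute_error(y_test, pred):.1f} Mbps")
```

In such a setup, the server owner would collect labeled (CPU footprint, traffic rate) pairs offline and then estimate the traffic condition at runtime without inspecting the client's application or its packets.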