Most of today's cloud applications are delivered by Cloud Service Providers (CSPs) on top of a physical network managed by one or multiple Infrastructure Providers (InPs). This new way of delivering services is eroding InPs' revenues, as InPs are only responsible for transporting data to users. Network Function Virtualization (NFV) was proposed to help InPs gain more flexibility in provisioning new services over their networks, hence achieving lower capital and operational costs, keeping stable revenue margins, and withstanding the competition of CSPs (e.g., the "Over-The-Top" players). NFV aims at moving from the traditional approach of network functions (e.g., firewall, NAT) running on dedicated hardware to virtualized software modules running on top of Commercial Off-The-Shelf (COTS) equipment. However, deploying NFV in an operational network requires addressing two fundamental problems. The first consists in determining the locations where Virtual Network Functions (VNFs) will be hosted (i.e., VNF placement), and the second in properly steering network traffic so that it traverses the required VNFs in the right order (i.e., routing), thus provisioning network services in the form of Service Function Chains (SFCs). In this work we address both problems, focusing our analysis on a metro-regional scenario, where link bandwidth and COTS node processing capacity are inherently limited and where the current trend is to move towards a Fixed and Mobile Convergence (FMC) network infrastructure. We propose and compare different heuristic strategies for SFC provisioning, characterized by latency and/or capacity awareness (i.e., able to best exploit the latency of links and/or the processing capacity of COTS nodes for an effective placement of VNFs) and by the adoption of a load-balancing policy for traffic routing, with
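To make the kind of latency- and capacity-aware SFC heuristic described above concrete, the following is a minimal sketch (not the paper's actual algorithm): each VNF of a chain is greedily hosted on the reachable COTS node that has enough residual processing capacity and minimizes the added path latency from the previous hop. All names, the toy topology, and the demand model are illustrative assumptions.

```python
import heapq


def shortest_latencies(adj, src):
    """Dijkstra over a latency-weighted graph.
    adj: {node: [(neighbor, link_latency), ...]}"""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist


def place_chain(adj, capacity, chain, src):
    """Greedy latency/capacity-aware placement of an SFC.
    capacity: residual processing units per COTS node (mutated).
    chain: processing demand of each VNF, in traversal order.
    Returns (placement, total_latency) or None if infeasible."""
    placement, total_latency, cur = [], 0.0, src
    for demand in chain:
        dist = shortest_latencies(adj, cur)
        candidates = [n for n in capacity
                      if capacity[n] >= demand and n in dist]
        if not candidates:
            return None  # no node can host this VNF
        best = min(candidates, key=lambda n: dist[n])
        capacity[best] -= demand      # consume processing capacity
        total_latency += dist[best]   # accumulate routing latency
        placement.append(best)
        cur = best                    # next VNF is placed relative to here
    return placement, total_latency
```

A capacity-only variant would rank candidates by residual capacity instead of latency; a load-balancing routing policy would additionally spread chains across equal-latency candidates.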