Along with the growing number of applications and end‐users, online network attacks and advanced generations of malware have continuously proliferated. Many studies have addressed the issue of intrusion detection by inspecting aggregated network traffic with no knowledge of the responsible applications or services. Such systems fail to detect intrusions in applications whenever their abnormal traffic fits within the network's normality profiles. We address the problem of detecting intrusions in (known) applications when their traffic exhibits anomalies. Building a traffic profile for each separate application is the main challenge of this problem. This paper surveys traffic classification methodologies, within a taxonomy framework, to identify those best suited to answering the following question: given a traffic sample generated by a particular application, does it conform to the application's expected traffic? The key requirements for a practical solution are discussed. The surveyed traffic classification methodologies are then assessed in terms of their capabilities, limitations and challenges when used as part of such a solution. The approaches based on "multiple sub‐flows" have shown the potential for building robust and practical per‐application profiles in near real‐time. An overview of a blend of real‐time approaches is also provided. Copyright © 2016 John Wiley & Sons, Ltd.