As Industry 4.0 makes its way into the Chemical Processing Industry (CPI), new challenges emerge that require an adaptation of the Process Analytics toolkit. In particular, two recurring classes of problems arise, motivated on one hand by the growing complexity of systems and on the other by increasing data throughput (i.e., the product of two well-known “V’s” of Big Data: Volume × Velocity). More specifically, as enabling IT technologies (IoT, smart sensors, etc.) widen the focus of analysis from the unit level to the entire plant, or even to the supply chain, the existence of relevant dynamics at multiple scales becomes a common pattern; multiscale methods are therefore called for, in order to avoid analyses biased towards a single scale, which would compromise the balanced exploitation of the information content at all scales. Furthermore, these same enabling technologies now collect large volumes of data at high sampling rates, creating a flood of digital information that must be properly handled; optimal data aggregation offers an efficient solution to this challenge and has led to the emergence of multi-granularity frameworks. In this article, an overview is presented of multiscale and multi-granularity methods that are likely to play an important role in the future of Process Analytics across several common activities, including data integration/fusion, de-noising, process monitoring and predictive modelling.
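To make the two central notions concrete, the sketch below illustrates, on a synthetic process signal, (i) a multiscale view via discrete wavelet decomposition with simple wavelet de-noising, and (ii) a multi-granularity view via block-wise aggregation at several window sizes. This is a minimal illustration, not a method from the article: the signal, wavelet choice (db4), decomposition depth, and block sizes are all assumptions chosen for demonstration, and the de-noising step uses the standard universal soft threshold of Donoho and Johnstone. It relies on NumPy and the PyWavelets package.

```python
import numpy as np
import pywt  # PyWavelets (pip install PyWavelets)

# Synthetic process signal: slow drift + periodic component + noise.
# All parameter values here are illustrative, not taken from the article.
rng = np.random.default_rng(0)
t = np.arange(4096)
signal = 0.001 * t + np.sin(2 * np.pi * t / 256) + 0.3 * rng.standard_normal(t.size)

# --- Multiscale view: discrete wavelet decomposition ----------------------
# wavedec splits the signal into one approximation (coarse trend) and several
# detail bands, one per scale, so no single scale dominates the analysis.
coeffs = pywt.wavedec(signal, wavelet="db4", level=4)
approx, details = coeffs[0], coeffs[1:]  # details[0] is coarsest, details[-1] finest
for band, d in enumerate(details, start=1):
    print(f"detail band {band}: {d.size} coefficients, energy {np.sum(d**2):.1f}")

# Simple wavelet de-noising: shrink small detail coefficients using the
# universal threshold, with the noise scale estimated from the finest band.
sigma = np.median(np.abs(details[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(signal.size))
den_coeffs = [approx] + [pywt.threshold(d, thr, mode="soft") for d in details]
denoised = pywt.waverec(den_coeffs, wavelet="db4")

# --- Multi-granularity view: aggregation at several block sizes -----------
# Averaging over progressively larger windows yields coarser "granules",
# trading temporal resolution for a reduction in data volume.
for block in (4, 16, 64):
    coarse = signal[: signal.size // block * block].reshape(-1, block).mean(axis=1)
    print(f"block size {block:>3}: {coarse.size} aggregated samples")
```

The two loops make the trade-offs visible: the wavelet bands separate the drift, the oscillation and the noise into different scales, while the increasing block sizes show how aggregation compresses the data stream at the cost of temporal detail, which is precisely the tension that multi-granularity frameworks are designed to manage.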