This thesis investigates how analysing container performance metrics can improve the automatic tuning of Hadoop and Spark. Analytics frameworks such as Hadoop and Spark are powerful tools for processing big data, and the parameter values of these frameworks significantly affect application performance. However, it is difficult to select optimal framework parameter values for every application. Many research teams have proposed methods for automatically tuning Hadoop or Spark parameters from different perspectives, yet most existing automatic tuning systems do not support multiple frameworks or Resource Managers. In addition, research on when to invoke the tuning method is lacking, which is another gap in the study of big data frameworks. Our research aims to fill these gaps. This thesis introduces novel container performance metrics and demonstrates that these metrics are beneficial for developing automatic tuning systems. Hadoop and Spark show different patterns in the static and dynamic values of container creation rate, container completion rate, container average response time, and the relative standard deviation of response time (RSD). By applying five kinds of machine learning algorithms, we found container creation rate to be the most sensitive metric for identifying and classifying workload type, with an average accuracy of 83%. RSD can be used to detect workload transitions with an average accuracy of 74%. These findings reduce tuning overhead and promote the development of automatic tuning systems.