Automated program repair (APR) aims to fix software bugs automatically, without human debugging effort, and plays a crucial role in software development and maintenance. Despite recent significant progress in the number of bugs fixed, APR is still challenged by a long-standing overfitting problem (i.e., a generated patch can be plausible, passing all available tests, yet still overfit the test suite and remain incorrect). Various techniques have thus been proposed to address the overfitting problem. Among them, leveraging deep learning to predict patch correctness automatically has recently emerged alongside the availability of large-scale patch benchmarks. However, existing learning-based techniques mainly rely on manually designed code features, which can be extremely costly and challenging to construct in practice. In this paper, we propose APPT, a pre-trained model-based automated patch correctness assessment technique, which treats source code as a sequence of tokens and avoids the extra overhead of designing a large set of features from different perspectives. In particular, APPT adopts a pre-trained model as the encoder stack, followed by an LSTM stack and a deep learning classifier. Although our idea is general and can be built on various existing pre-trained models, we implement APPT based on the BERT model. We conduct extensive experiments on 1,183 Defects4J patches, and the results show that APPT achieves a prediction accuracy of 79.0% and a recall of 81.3%, outperforming the state-of-the-art technique CACHE by 3.6% and 4.8%, respectively. Our additional investigation on 49,694 real-world patches shows that APPT achieves the best performance (exceeding 99% on five common metrics for assessing patch classification techniques) compared with existing representation learning techniques. We also show that adopting more advanced pre-trained models provides further substantial improvement (e.g., GraphCodeBERT-based APPT improves BERT-based APPT by 3.0% in precision and 2.6% in recall), highlighting the generalizability of APPT.
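To make the described pipeline concrete, the following is a minimal PyTorch sketch of the architecture the abstract outlines (pre-trained encoder stack, then an LSTM stack, then a classifier), assuming Hugging Face's transformers library. The layer sizes, the bidirectional LSTM, the pooling strategy, and the paired buggy/patched input encoding are illustrative assumptions, not APPT's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class PatchCorrectnessClassifier(nn.Module):
    """Sketch: pre-trained encoder -> LSTM stack -> deep learning classifier."""

    def __init__(self, hidden_size=256, num_lstm_layers=2):
        super().__init__()
        # Pre-trained model as the encoder stack (BERT here; any encoder works).
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        # LSTM stack over the encoder's token-level representations.
        self.lstm = nn.LSTM(
            input_size=self.encoder.config.hidden_size,
            hidden_size=hidden_size,
            num_layers=num_lstm_layers,
            batch_first=True,
            bidirectional=True,
        )
        # Classifier head: correct vs. overfitting patch (2 classes).
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 2),
        )

    def forward(self, input_ids, attention_mask):
        # Token embeddings from the pre-trained encoder.
        token_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        _, (h_n, _) = self.lstm(token_states)
        # Concatenate the last layer's final forward and backward states.
        patch_repr = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        return self.classifier(patch_repr)

# Usage sketch: encode buggy and patched code as a single token sequence.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = PatchCorrectnessClassifier()
batch = tokenizer(
    "int a = b + 1;",   # buggy snippet (illustrative)
    "int a = b - 1;",   # candidate patch (illustrative)
    return_tensors="pt", truncation=True, padding=True,
)
logits = model(batch["input_ids"], batch["attention_mask"])  # shape: (1, 2)
```

Because the encoder is only treated as a token-sequence embedder, swapping `BertModel` for another pre-trained checkpoint (e.g., GraphCodeBERT via `AutoModel.from_pretrained`) leaves the rest of the pipeline unchanged, which is what the generalizability claim above relies on.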