In this paper, a novel skeleton-based approach to human time-varying mesh (H-TVM) compression is presented. TVM compression is a relatively new topic that poses several challenges, the most important being the lack of an explicit vertex correspondence across frames and the variable connectivity from frame to frame, both of which must be handled without sacrificing efficiency. Few works exist in the literature, and not all of these challenges have been addressed yet; developing an efficient, real-time solution that handles all of them is a difficult task. We address the H-TVM compression problem with an approach inspired by video coding: different frame types are used, and inter-frame geometric redundancy is removed by exploiting recent advances in human skeleton tracking. The overall approach targets compression efficiency, low distortion, and low computation time, enabling real-time transmission of H-TVMs, and it efficiently compresses both the geometry and the vertex attributes of TVMs. In addition, this paper is the first to provide an efficient method for connectivity coding of TVMs, by introducing a modification to the state-of-the-art MPEG-4 TFAN algorithm. Experiments are conducted on the MPEG-3DGC TVM database. The method outperforms the state-of-the-art standardized static mesh coder MPEG-4 TFAN at low bit rates, while remaining competitive at high bit rates. It provides a practical proof of concept that, in the combined problem of geometry, connectivity, and vertex attribute coding of TVMs, efficient inter-frame redundancy removal is possible, establishing ground for further improvements. Finally, this paper proposes a method for motion-based coding of H-TVMs that can further enhance the overall experience when H-TVM compression is used in a tele-immersion scenario.
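To make the frame-based idea concrete, the following is a minimal, illustrative sketch (in Python/NumPy, not taken from the paper) of how an I/P-frame scheme with skeleton-driven motion compensation could be organized: the first frame of each group is intra-coded, and subsequent frames are predicted by warping the reference mesh with per-bone rigid transforms estimated from the tracked skeleton, so that only quantized residuals need to be coded. All names and parameters here (rigid_fit, predict_from_skeleton, encode_gop, weights, gop_size, step) are hypothetical, and the sketch assumes that a vertex correspondence with the reference frame and per-vertex bone weights are already available, which is itself part of what the actual method has to establish.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i
    (Kabsch algorithm), used to estimate the motion of one bone between frames."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - sc @ R.T

def predict_from_skeleton(ref_verts, weights, ref_joints, cur_joints):
    """Warp the reference mesh with per-bone rigid transforms blended by
    skinning-like weights (one column per bone, rows summing to 1)."""
    pred = np.zeros_like(ref_verts)
    for b in range(ref_joints.shape[0] - 1):            # bone b joins joints b and b+1
        R, t = rigid_fit(ref_joints[b:b + 2], cur_joints[b:b + 2])
        pred += weights[:, b:b + 1] * (ref_verts @ R.T + t)
    return pred

def encode_gop(frames, skeletons, weights, gop_size=8, step=1e-3):
    """Toy encoder loop: intra-code the first frame of each group (I frame),
    then code later frames as quantized residuals against a skeleton-motion-
    compensated prediction (P frames)."""
    stream = []
    for i, (verts, joints) in enumerate(zip(frames, skeletons)):
        if i % gop_size == 0:
            stream.append(("I", verts.copy()))          # stand-in for a static mesh coder such as TFAN
            ref_verts, ref_joints = verts, joints
        else:
            pred = predict_from_skeleton(ref_verts, weights, ref_joints, joints)
            residual = np.round((verts - pred) / step)  # uniform quantization before entropy coding
            stream.append(("P", residual.astype(np.int32)))
    return stream
```

A decoder would mirror this loop, dequantizing the residuals and adding them back to the same skeleton-based prediction; in a full codec, the intra frames and residuals would additionally pass through the connectivity and attribute coding stages described in the paper.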