Point cloud data from LiDAR sensors is currently the basis of most L4 autonomous driving systems. Sharing and storing point clouds will also be important for future applications, such as accident investigation or V2V/V2X networks. Due to the huge volume of data involved, storing point clouds collected over long periods and transmitting them in real time are challenging, making compression an indispensable step before storage or transmission. Previous streaming point cloud compression methods, such as octree compression or video compression-based approaches, have difficulty compressing this data in real time to very small sizes with low information loss. To reduce temporal redundancy efficiently and rapidly, in this paper we propose a real-time streaming point cloud compression method using U-Net. By utilizing raw packet data from LiDAR sensors, we can store 3D point cloud information losslessly in a 2D matrix and convert streaming point cloud data into a video-like format. By designating some frames as reference frames and then using U-Net to interpolate the remaining LiDAR frames, we can greatly reduce temporal redundancy. Our use of U-Net was inspired by a video interpolation approach employed in prior work. Noise in LiDAR data significantly degrades both network training and compression performance; to alleviate its negative impact, we propose a padding strategy. As a result of these improvements, our proposed method outperforms octree compression, MPEG-based compression, and our previously proposed SLAM-based compression method.
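As a rough illustration of the 2D representation and reference-frame scheme described above, the following Python sketch packs raw per-laser range readings into a matrix indexed by laser channel and azimuth step, then splits the resulting frame sequence into reference and intermediate frames. The channel count, azimuth resolution, interval, and packet layout are assumptions chosen for illustration, not the exact format of any particular sensor or the paper's implementation.

```python
import numpy as np

# Hypothetical sensor geometry, chosen only for illustration:
# a 64-beam LiDAR firing 2048 times per revolution.
NUM_LASERS = 64
NUM_AZIMUTH_STEPS = 2048

def packets_to_frame(measurements):
    """Pack raw LiDAR measurements into a 2D matrix (a "LiDAR frame").

    `measurements` is an iterable of (laser_id, azimuth_step, range_raw)
    tuples decoded from the sensor's packet stream. Storing the raw
    integer range for each (laser, azimuth) cell keeps the representation
    lossless, and consecutive frames form a video-like sequence whose
    temporal redundancy a frame-interpolation network can exploit.
    """
    frame = np.zeros((NUM_LASERS, NUM_AZIMUTH_STEPS), dtype=np.uint16)
    for laser_id, azimuth_step, range_raw in measurements:
        frame[laser_id, azimuth_step] = range_raw
    return frame

def split_stream(frames, interval=4):
    """Split a frame sequence into reference frames (kept verbatim)
    and intermediate frames (to be reconstructed by U-Net interpolation
    between the surrounding references).
    """
    refs = frames[::interval]
    intermediates = [f for i, f in enumerate(frames) if i % interval]
    return refs, intermediates
```

Cells with no valid return remain zero in this sketch; dropouts of this kind are one plausible source of the LiDAR noise that the proposed padding strategy is meant to mitigate.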