5G and beyond networks are expected to host several heterogeneous verticals with very different requirements in terms of latency, data rate, and reliability. In the case of human-machine collaboration and cohabitation, very low latency is necessary to permit the seamless remote control of robots and the transmission of sensory experience involving not only audiovisual but also tactile information. In this context, ultra-precise and reliable time synchronization is pivotal. However, existing standards such as the Precision Time Protocol (PTP) have proven unreliable owing to jitter, datagram loss, and implementation complexity: in practice, the synchronization error can grow from the ideal tens of nanoseconds to hundreds of microseconds, which is unacceptable in future-generation networks. This study proposes a novel approach to establishing ultra-precise synchronization, which is critical for the growth of converged optical communication networks and the 6G era. We present a synchronization mechanism that combines the unprecedented accuracy of optical lattice clocks and frequency combs with electronic components such as Analog-to-Digital Converters (ADCs) and Field-Programmable Gate Arrays (FPGAs). Our method transforms optical pulses into precisely timed electrical signals that can be processed and used in sophisticated network systems, ensuring that data transfer across networks is precisely synchronized, which is crucial for future networks where data must be synchronized over immense distances with minimal latency. We demonstrate feasibility and practical applicability through extensive MATLAB simulations, moving beyond theoretical principles toward solutions implementable with existing technical capabilities. The purpose of this paper is to bring femtosecond-level signal synchronization to nanosecond-level devices.
In the simulation, we generate an optical signal with a frequency of 1000 THz and downconvert it to a lower microwave frequency (100 GHz) to obtain picosecond-level synchronized signals. The downconverted signal is corrupted with white noise and then digitized: digitization is modeled by sampling at fs = 10 THz with a resolution of b = 8 bits. Finally, high-frequency noise is removed by low-pass filtering implemented on FPGAs.
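The simulation chain described above can be sketched in Python with NumPy (the paper's own study uses MATLAB). The carrier, intermediate frequency, sampling rate, and resolution follow the text; the noise amplitude, signal length, random seed, and the 200 GHz filter cutoff are illustrative assumptions, and an FFT brick-wall filter stands in for the FPGA low-pass implementation:

```python
import numpy as np

# Parameters stated in the text; noise level, duration, and cutoff are
# illustrative assumptions, not the authors' actual simulation settings.
f_if = 100e9    # downconverted microwave frequency: 100 GHz
fs = 10e12      # ADC sampling rate: fs = 10 THz
bits = 8        # ADC resolution: b = 8 bits
n = 10_000      # number of samples (1 ns of signal, assumed)

t = np.arange(n) / fs

# Downconversion is modeled directly: after mixing, only the 100 GHz
# intermediate frequency derived from the 1000 THz optical carrier remains.
clean = np.cos(2 * np.pi * f_if * t)

# Additive white Gaussian noise (amplitude assumed).
rng = np.random.default_rng(0)
noisy = clean + 0.1 * rng.standard_normal(n)

# b-bit uniform quantization over [-1, 1] models the ADC.
levels = 2 ** bits
codes = np.round((np.clip(noisy, -1, 1) + 1) / 2 * (levels - 1))
digital = codes / (levels - 1) * 2 - 1

# Low-pass filtering: an FFT brick-wall stand-in for the FPGA filter,
# keeping only spectral components below an assumed 200 GHz cutoff.
spectrum = np.fft.rfft(digital)
freqs = np.fft.rfftfreq(n, d=1 / fs)
spectrum[freqs > 200e9] = 0
filtered = np.fft.irfft(spectrum, n)
```

After filtering, the recovered 100 GHz tone is substantially closer to the clean reference than the noisy quantized signal, since most of the white-noise power above the cutoff is rejected.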