Synchronization is a widespread phenomenon in the brain. Despite numerous studies, it remains unclear which specific parameter configurations of the synaptic network structure and learning rules are needed to achieve robust and enduring synchronization in neurons driven by spike-timing-dependent plasticity (STDP) and temporal networks subject to homeostatic structural plasticity (HSP) rules. Here, we bridge this gap by determining the configurations required to achieve high and stable degrees of complete synchronization (CS) and phase synchronization (PS) in time-varying small-world and random neural networks driven by STDP and HSP. In particular, we found that decreasing P (which enhances the strengthening effect of STDP on the average synaptic weight) and increasing F (which speeds up the rate at which synapses are swapped between neurons) always lead to higher and more stable degrees of CS and PS in small-world and random networks, provided that network parameters such as the synaptic time delay $$\tau _c$$, the average degree $$\langle k \rangle$$, and the rewiring probability $$\beta$$ take appropriate values. When $$\tau _c$$, $$\langle k \rangle$$, and $$\beta$$ are not fixed at these appropriate values, the degree and stability of CS and PS may increase or decrease as F increases, depending on the network topology. It is also found that the time delay $$\tau _c$$ can induce intermittent CS and PS whose occurrence is independent of F. Our results could have applications in the design of neuromorphic circuits for optimal information processing and transmission via synchronization phenomena.
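The abstract quantifies synchronization through the "degrees" of CS and PS without stating the measures here. Below is a minimal sketch, assuming the standard choices from the synchronization literature: a CS error given by the time-averaged dispersion of membrane potentials across neurons, and a PS degree given by the Kuramoto order parameter computed from linearly interpolated spike phases. The function names, array conventions (`V` as an `N x T` array, spike trains as sorted 1-D arrays), and the linear-phase interpolation are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def cs_error(V):
    """Complete-synchronization (CS) error (assumed measure).

    V : array of shape (N, T) -- membrane potentials of N neurons at T time points.
    Returns the time-averaged standard deviation across neurons;
    values near 0 indicate a high degree of CS.
    """
    return np.mean(np.std(V, axis=0))

def spike_phases(spike_times, t_grid):
    """Instantaneous phase of one neuron via linear interpolation between spikes.

    spike_times : sorted 1-D array of spike times of one neuron.
    t_grid      : 1-D array of evaluation times strictly inside the
                  first and last spike of the train.
    Returns 2*pi*(t - t_k)/(t_{k+1} - t_k) for the enclosing spike pair.
    """
    idx = np.searchsorted(spike_times, t_grid, side="right") - 1
    t_k, t_k1 = spike_times[idx], spike_times[idx + 1]
    return 2.0 * np.pi * (t_grid - t_k) / (t_k1 - t_k)

def ps_degree(all_spike_times, t_grid):
    """Phase-synchronization (PS) degree via the Kuramoto order parameter.

    all_spike_times : list of sorted 1-D arrays, one spike train per neuron.
    Returns the time-averaged order parameter R in [0, 1]; R -> 1 means PS.
    """
    phases = np.array([spike_phases(st, t_grid) for st in all_spike_times])
    R_t = np.abs(np.mean(np.exp(1j * phases), axis=0))  # R(t) over the ensemble
    return R_t.mean()
```

With these (assumed) measures, the abstract's claims translate into concrete readouts: a configuration yields "higher and more stable" CS or PS when `cs_error` stays close to zero (or `ps_degree` stays close to one) with small fluctuations over long simulation windows as P is decreased and F is increased.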