Extracting parallelism from programs is growing more important as the number of cores in processors increases. Parallelization usually involves splitting a sequential thread and scheduling the resulting code to run on multiple cores. For example, some previous Speculative Multi-Threading research used code block reordering to automatically parallelize a sequential thread on multi-core processors. Although the parallelized code blocks can run on different cores, data dependences may remain among them. Such parallelization therefore introduces data dependences among the cores on which the code blocks run, and these must be resolved during execution through cross-core data synchronization, which is usually expensive. This paper proposes to minimize cross-core data synchronization with core-affinity-aware code block scheduling. Our work is based on a Speculative Multi-Threading (SpMT) approach with code block reordering, which we improve by implementing an affinity-aware block scheduling algorithm. We built a simulator to model the SpMT architecture and conducted experiments with SPEC2006 benchmarks. The data shows that affinity-aware block scheduling can eliminate a substantial amount of cross-core data synchronization (up to 28.7% for gromacs). With an inter-core register synchronization delay of 5 cycles, this suggests a 3.73% increase in performance.
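
To make the idea concrete, the sketch below shows one simple way an affinity-aware assignment of code blocks to cores could look: each block is greedily placed on the core that already holds the producers it communicates with most, subject to a per-core capacity limit so the work stays spread across cores. This is only an illustrative sketch under assumed inputs (the block list, dependence weights, and function names are hypothetical), not the scheduling algorithm evaluated in the paper.

```python
# Illustrative sketch of affinity-aware block-to-core assignment.
# Blocks, dependence weights, and helper names are hypothetical; the paper's
# actual SpMT scheduler is not reproduced here.
import math

def schedule_blocks(blocks, deps, num_cores):
    """Assign each block to a core, preferring the core that already holds
    the producers it depends on most heavily.

    blocks: block ids in scheduling order
    deps:   dict mapping (producer, consumer) -> number of values communicated
    returns: dict block -> core index
    """
    capacity = math.ceil(len(blocks) / num_cores)  # keep cores roughly balanced
    assignment = {}
    load = [0] * num_cores

    for blk in blocks:
        # Affinity of blk to each core = dependence weight to blocks already there.
        affinity = [0] * num_cores
        for (prod, cons), weight in deps.items():
            if cons == blk and prod in assignment:
                affinity[assignment[prod]] += weight
        # Choose the highest-affinity core that still has capacity;
        # break ties by lightest load.
        eligible = [c for c in range(num_cores) if load[c] < capacity]
        core = max(eligible, key=lambda c: (affinity[c], -load[c]))
        assignment[blk] = core
        load[core] += 1
    return assignment

def cross_core_syncs(assignment, deps):
    """Count dependence values that must cross cores under an assignment."""
    return sum(w for (p, c), w in deps.items() if assignment[p] != assignment[c])

if __name__ == "__main__":
    blocks = ["B0", "B1", "B2", "B3"]
    deps = {("B0", "B1"): 3, ("B0", "B2"): 1, ("B1", "B3"): 2, ("B2", "B3"): 1}
    naive = {b: i % 2 for i, b in enumerate(blocks)}     # round-robin baseline
    affine = schedule_blocks(blocks, deps, num_cores=2)  # affinity-aware
    print("round-robin cross-core syncs:", cross_core_syncs(naive, deps))
    print("affinity-aware cross-core syncs:", cross_core_syncs(affine, deps))
```

On this toy example the affinity-aware placement keeps the heaviest producer-consumer pairs on the same core, reducing the number of values that must be synchronized across cores relative to a round-robin placement; a real scheduler must also weigh this against the parallelism lost by co-locating blocks.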