Generating Reconstructable Collaborative Virtual Environments (RCVEs) addresses the spatial heterogeneity challenge in Mixed Reality (MR) remote collaboration. Existing methods partition the remote space into discrete areas and align each area with the local space; however, disparities in the structural and semantic configurations of the two spaces hinder continuous avatar movement. This paper proposes a Graph Matching-based approach that constrains the spatial topological relationships among discrete areas during RCVE alignment. An Artificial Potential Fields-based avatar mapping method, named APFBAM, is introduced to rapidly capture and redirect avatars in heterogeneous spaces. Experiments using virtual scenes reconstructed from real-world data demonstrate the effectiveness of these methods in MR remote collaboration. The results indicate that users perceive minimal impact on their experience despite spatial structural differences, validating the approach's suitability for enabling flexible and scalable MR remote collaboration.
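To ground the redirection idea the abstract refers to, the following is a minimal sketch of the classical Artificial Potential Fields principle: an attractive force pulls a mapped avatar toward its target position while repulsive forces push it away from nearby obstacles. This is an illustration of the general APF technique only, not the paper's APFBAM implementation; the function `apf_step`, its gain parameters, and the 2D setup are hypothetical.

```python
import numpy as np

def apf_step(avatar_pos, target_pos, obstacles,
             k_att=1.0, k_rep=0.5, influence_radius=1.0, step_size=0.05):
    """One APF update: combine attractive and repulsive forces, take a small step."""
    avatar_pos = np.asarray(avatar_pos, dtype=float)
    target_pos = np.asarray(target_pos, dtype=float)

    # Attractive force: proportional to the vector toward the target.
    force = k_att * (target_pos - avatar_pos)

    # Repulsive force from each obstacle inside the influence radius,
    # using the gradient of the standard repulsive potential
    # U_rep = 0.5 * k_rep * (1/d - 1/d0)^2.
    for obs in obstacles:
        diff = avatar_pos - np.asarray(obs, dtype=float)
        dist = np.linalg.norm(diff)
        if 1e-6 < dist < influence_radius:
            force += k_rep * (1.0 / dist - 1.0 / influence_radius) * diff / dist**3

    # Move the avatar a small step along the normalized force direction.
    norm = np.linalg.norm(force)
    if norm > 1e-6:
        avatar_pos = avatar_pos + step_size * force / norm
    return avatar_pos


# Example: steer an avatar toward (3, 2) while avoiding an obstacle at (1.5, 1).
pos = np.array([0.0, 0.0])
for _ in range(200):
    pos = apf_step(pos, target_pos=(3.0, 2.0), obstacles=[(1.5, 1.0)])
print(pos)  # ends near the target after skirting the obstacle
```

In an RCVE setting, the target position would come from mapping the avatar's pose in the remote space into the aligned local area, and the obstacle set from the local scene geometry; both are assumed inputs here.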