Virtual Fixtures (VFs) provide haptic feedback for teleoperation, typically requiring distinct input modalities for different phases of a task. This often results in vision- and position-based fixtures. Vision-based fixtures, in particular, require handling visual uncertainty, as well as target appearance and disappearance, for increased flexibility. This creates the need for principled ways to add and remove fixtures, in addition to uncertainty-aware assistance regulation. Moreover, the arbitration of different modalities plays a crucial role in providing optimal feedback to the user throughout the task. In this paper, we propose a Mixture of Experts (MoE) model that synthesizes visual servoing fixtures, elegantly handling full pose detection uncertainties and teleoperation goals in a unified framework. An arbitration function combining multiple vision-based fixtures arises naturally from the MoE formulation, leveraging uncertainties to modulate fixture stiffness and thus the degree of assistance. The resulting visual servoing fixtures are then fused with position-based fixtures using a Product of Experts (PoE) approach, achieving guidance throughout the complete workspace. Our results indicate that this approach not only permits human operators to accurately insert printed circuit boards (PCBs) but also offers added flexibility, while retaining the performance level of a baseline with carefully hand-tuned VFs and without requiring the manual creation of VFs for individual connectors. An exemplary video showcasing our method is
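The two core ideas above (MoE arbitration of vision-based fixtures with uncertainty-modulated stiffness, and PoE fusion with position-based fixtures) can be illustrated with a minimal numerical sketch. This is a hypothetical example, not the paper's implementation: the stiffness schedule `k_max / (1 + trace(S))`, the Gaussian gating, and all variable names are assumptions chosen for clarity.

```python
import numpy as np

def moe_responsibilities(errors, covariances, priors):
    """Gating weights h_i ∝ pi_i * N(e_i | 0, Sigma_i) for each fixture.
    errors: list of pose-error vectors; covariances: detection covariances."""
    logps = []
    for e, S, p in zip(errors, covariances, priors):
        d = len(e)
        _, logdet = np.linalg.slogdet(S)
        m = e @ np.linalg.solve(S, e)            # squared Mahalanobis distance
        logps.append(np.log(p) - 0.5 * (d * np.log(2 * np.pi) + logdet + m))
    logps = np.array(logps)
    w = np.exp(logps - logps.max())              # numerically stable softmax
    return w / w.sum()

def fixture_force(errors, covariances, priors, k_max=200.0):
    """Blend spring-like guidance forces; each fixture's stiffness is
    scaled down as its detection uncertainty grows (assumed schedule)."""
    h = moe_responsibilities(errors, covariances, priors)
    f = np.zeros_like(errors[0])
    for hi, e, S in zip(h, errors, covariances):
        k = k_max / (1.0 + np.trace(S))          # uncertainty-modulated stiffness
        f += hi * (-k * e)
    return f

def poe_fuse(means, covs):
    """Product-of-Experts fusion of Gaussian fixture targets:
    precision-weighted combination of means."""
    P = sum(np.linalg.inv(S) for S in covs)      # summed precisions
    Sf = np.linalg.inv(P)
    mu = Sf @ sum(np.linalg.inv(S) @ m for m, S in zip(means, covs))
    return mu, Sf

# Two candidate visual targets: one confident and near, one uncertain and far.
errs = [np.array([0.01, 0.0]), np.array([0.10, 0.05])]
covs = [np.eye(2) * 1e-4, np.eye(2) * 1e-2]
print(moe_responsibilities(errs, covs, priors=[0.5, 0.5]))
print(fixture_force(errs, covs, priors=[0.5, 0.5]))
```

In this toy setting the confident detection receives nearly all the gating weight, so the blended force is dominated by the well-localized fixture, while `poe_fuse` pulls a fused target toward whichever expert has the higher precision.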