Creating a navigation system for an autonomous companion robot is a challenging task: the robot must contend with a dynamically changing environment populated by numerous obstructions and an unspecified number of people besides the person it is intended to follow. This study documents the implementation of an indoor autonomous robot navigation model based on multi-sensor fusion, using Microsoft Robotics Developer Studio 4 (MRDS). The model relies on a depth camera, a limited array of proximity sensors, and an active IR marker tracking system, allowing the robot to lock onto the correct human target while estimating the direction of travel that maneuvers around obstacles with the minimum required motion. The navigation algorithm transforms the data from all three sensor types into tendency arrays and fuses them to decide whether to take a leftward or rightward route around an encountered obstacle. The decision process considers visible short-, medium-, and long-range obstructions as well as the current position of the target person. The system's functional performance is demonstrated over a series of MRDS Virtual Simulation Environment scenarios, and the results support further extensive benchmark simulations.
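The fusion step described above, converting each sensor modality into a tendency array over candidate directions and combining them into a left/right routing decision, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the angular binning, the per-sensor normalisation, the fusion weights, and all function names are assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch: each sensor modality yields a "tendency array" over
# candidate steering directions (index 0 = far left ... last = far right).
# Higher values mean that direction looks more favourable.

N_DIRECTIONS = 9  # assumed angular resolution, not specified in the paper


def depth_tendency(depth_scan: np.ndarray) -> np.ndarray:
    """Favour directions with more free space in the depth camera's view."""
    # depth_scan: free-space distance per direction bin (metres, assumed)
    return depth_scan / (depth_scan.max() + 1e-6)


def proximity_tendency(ir_distances: np.ndarray) -> np.ndarray:
    """Penalise directions where short-range proximity sensors report obstacles."""
    return 1.0 - np.clip(1.0 / (ir_distances + 1e-6), 0.0, 1.0)


def target_tendency(target_bearing: float) -> np.ndarray:
    """Prefer directions closer to the IR-marker bearing of the followed person."""
    bins = np.linspace(-1.0, 1.0, N_DIRECTIONS)  # normalised bearing per bin
    return np.clip(1.0 - np.abs(bins - target_bearing), 0.0, 1.0)


def choose_side(depth_scan, ir_distances, target_bearing,
                weights=(0.5, 0.3, 0.2)) -> str:
    """Fuse the three tendency arrays and pick a leftward or rightward route."""
    fused = (weights[0] * depth_tendency(depth_scan)
             + weights[1] * proximity_tendency(ir_distances)
             + weights[2] * target_tendency(target_bearing))
    mid = N_DIRECTIONS // 2
    # Compare aggregate tendency on each side of straight ahead.
    return "left" if fused[:mid].sum() > fused[mid + 1:].sum() else "right"
```

Under these assumptions, the weighted sum lets the long-range depth view dominate while the proximity sensors veto directions with imminent collisions and the IR marker biases the choice toward the side on which the followed person currently stands.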