The recent growth in the wearable sensor market has stimulated new opportunities within the domain of Ambient Assisted Living, providing unique methods of collecting occupant information. This approach leverages contemporary wearable technology, in the form of Google Glass, to provide a first-person view of the occupant's immediate environment. Machine vision techniques are employed to determine an occupant's location through the detection of objects in that environment. This method offers additional secondary benefits, such as first-person tracking within the environment and the absence of any required sensor interaction to determine occupant location. Object recognition is performed using the Oriented Features from Accelerated Segment Test and Rotated Binary Robust Independent Elementary Features (ORB) algorithm, together with a k-Nearest Neighbour matcher that matches the saved key-points of each object to the scene. To validate the approach, an experimental set-up consisting of three ADL routines, each containing at least ten activities ranging from drinking water to making a meal, was considered. Ground truth was obtained from manually annotated video data, and the approach was previously benchmarked against a common method of indoor localisation that employs dense sensor placement, resulting in a recall, precision, and F-measure of 0.82, 0.96, and 0.88, respectively. This paper goes on to assess the viability of applying the solution to differing environments, both in terms of performance and through a qualitative analysis of the practical aspects of installing such a system in each environment.
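As a concrete illustration of the detection stage described above, the following is a minimal sketch of ORB key-point extraction and k-Nearest Neighbour descriptor matching, assuming an OpenCV-based pipeline; the file names, feature count, ratio threshold, and match threshold are illustrative assumptions rather than the parameters used in the evaluated system.

```python
import cv2

# Detect ORB key-points and descriptors for a stored reference object
# and an incoming first-person scene frame (hypothetical file names).
orb = cv2.ORB_create(nfeatures=1000)

object_img = cv2.imread("kettle.png", cv2.IMREAD_GRAYSCALE)      # reference object image
scene_img = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)   # Glass camera frame

kp_obj, des_obj = orb.detectAndCompute(object_img, None)
kp_scene, des_scene = orb.detectAndCompute(scene_img, None)

# k-NN matching of binary ORB descriptors (Hamming distance), with a ratio
# test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_obj, des_scene, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# If enough key-points match, the reference object is taken to be present in
# the scene, which in turn implies the occupant's location (e.g. kettle -> kitchen).
if len(good) > 20:
    print("Object detected in scene")
```

In such a set-up, each object of interest would be enrolled once as a reference image, and the matching step above would be repeated per object against each incoming frame to infer the occupant's current location.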