Activity Recognition (AR) plays a key role in context-aware assistive living systems. One challenge in AR is the segmentation of observed sensor events when interleaved or concurrent activities of daily living (ADLs) are performed. Several studies have proposed methods for separating and organising sensor observations and recognising generic ADLs performed in a simple or composite manner. However, little work has explored semantically distinguishing individual sensor events directly and assigning them to the relevant ongoing or new atomic activities. This paper proposes an ontological model, inspired by Semiotic theory, that captures generic knowledge and inhabitant-specific preferences for conducting ADLs to support the segmentation process. A multithreaded decision algorithm and a system prototype were developed and evaluated against 30 use-case scenarios, in which each event was simulated at a 10-second interval on a machine with a 2-core Intel i7 2.60 GHz CPU and 8 GB RAM. The results suggest that all sensor events were adequately segmented, with 100% accuracy for single-ADL scenarios and a slightly lower accuracy of 97.8% for composite-ADL scenarios. However, runtime performance suffered, with average per-event classification times of 3971 ms and 62183 ms for single- and composite-ADL scenarios, respectively.