Current activity recognition approaches have achieved great success, owing to advances in deep learning and the availability of large public benchmark datasets. These datasets focus on highly distinctive actions involving discriminative body movements, body-object, and/or human-human interactions. However, in real-world scenarios such as the functional assessment of a rehabilitation task, which requires distinguishing executions of the same activity performed by individuals with different impairments, their recognition accuracy is far from satisfactory. To address this, we develop Functional-ADL, a challenging novel dataset designed to take action recognition to a new level. Unlike existing datasets, Functional-ADL provides multi-label, impairment-specific executions of different Activities of Daily Living (ADL), contributing towards vision-based automated assessment and rehabilitation of physically impaired persons. We also propose a novel pose-based two-stream multi-label activity recognition model consisting of a spatial and a temporal stream. The proposed approach outperforms the state-of-the-art by a considerable margin. The new Functional-ADL dataset presents significant challenges for human activity recognition, and we hope it will advance research on activity understanding and monitoring.