Abstract-Hand detection is one of the most explored areas in egocentric vision video analysis for wearable devices. Current methods focus on pixel-by-pixel hand segmentation, under the implicit assumption that hands are present in almost all activities. However, this assumption is false in many applications of wearable cameras. Ignoring this fact could degrade the overall performance of the device, since hand measurements are usually the starting point for higher-level inference, or could lead to inefficient use of computational resources and battery power. In this paper we propose a two-level sequential classifier, in which the first level, a hand-detector, deals with the possible presence of hands from a global perspective, and the second level, a hand-segmenter, delineates the hand regions at pixel level in the cases indicated by the first block. The performance of the sequential classifier is stated in probabilistic notation as a combination of both classifiers, allowing new hand-detectors to be tested independently of the type of segmentation and of the dataset used in the training stage. Experimental results show a considerable improvement in the detection of true negatives, without compromising the performance on true positives.
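The two-level cascade described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the frame-level detector and the pixel-level segmenter are stand-in skin-tone heuristics (a hypothetical threshold rule on RGB values), chosen only to show the control flow in which the expensive per-pixel stage is skipped whenever the cheap global stage reports no hands.

```python
import numpy as np


def _skin_mask(frame):
    """Hypothetical skin-tone rule on an RGB uint8 frame (placeholder,
    NOT the paper's method): returns a boolean per-pixel mask."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)


def hand_detector(frame):
    """First level: a global, frame-level decision on hand presence.
    Here: 'hands present' if enough pixels look skin-like."""
    return _skin_mask(frame).mean() > 0.02


def hand_segmenter(frame):
    """Second level: pixel-wise delineation of hand regions,
    run only on frames the first level flags."""
    return _skin_mask(frame)


def sequential_pipeline(frame):
    """Two-level sequential classifier: skip the costly segmenter on
    hand-free frames, saving computation and battery power."""
    if not hand_detector(frame):
        # First level reports no hands: return an empty mask directly.
        return np.zeros(frame.shape[:2], dtype=bool)
    return hand_segmenter(frame)
```

In this sketch the savings come from short-circuiting: frames rejected by the global detector never reach the per-pixel stage, which is where the true-negative improvement reported in the abstract would matter in practice.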