Figure 1: Cutting (first two rows) and washing (last two rows) activities performed by two different subjects, as seen from head-mounted egocentric cameras. The thesis of this paper is that egocentric cameras can capture wearer-identifying hand gesture signatures merely by observing the various activities performed by the wearers. While it may be difficult for the reader to distinguish the wearers visually, our deep neural network-based model correctly identifies that the activities in rows 1 and 3 have been performed by the same wearer, whereas the activities in rows 2 and 4 have been performed by the other subject.