“…Using video recordings, eyelid movement is visible in the images and can be assessed with image processing methods. Algorithms developed for this purpose are based on motion detection derived from differencing two consecutive images (e.g., Bhaskar, Keat, Ranganath, & Venkatesh, 2003; Chau & Betke, 2005; Fogelton & Benesova, 2016; Jiang, Tien, Huang, Zheng, & Atkins, 2013), a second-order derivative method applied to image differences (Gorodnichy, 2003), a classification of the eye state (e.g., Choi, Han, & Kim, 2011; Missimer & Betke, 2010; Pan, Sun, & Wu, 2008; Pan, Sun, Wu, & Lao, 2007), an evaluation of the color contrast or the amount of visible color in specific eye regions (Cohn, Xiao, Moriyama, Ambadar, & Kanade, 2003; Danisman, Bilasco, Djeraba, & Ihaddadene, 2010; Lee, Lee, & Park, 2010), the distance between landmarks or arcs representing the upper and lower eyelid (Fuhl et al., 2016; Ito, Mita, Kozuka, Nakano, & Yamamoto, 2002; Miyakawa, Takano, & Nakamura, 2004; Moriyama et al., 2002; Sukno, Pavani, Butakoff, & Frangi, 2009), the disappearance of regions of the open eye, such as the iris or pupil, when they are occluded by the upper and lower eyelid (Hansen & Pece, 2005; Pedrotti, Lei, Dzaack, & Rötting, 2011), or a combination of these methods (Sirohey, Rosenfeld, & Duric, 2002). Instead of measuring the actual distance between the upper and lower eyelid, most of these algorithms rely on an indirect measure (motion detection, classification, color contrast, missing eye regions) to infer whether the eye is closed.…”
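To make the frame-differencing idea concrete, the following is a minimal illustrative sketch, not a reconstruction of any of the cited implementations: it flags frames in which a large fraction of eye-region pixels change between consecutive grayscale images, which is the kind of indirect motion cue described above. The function names and both thresholds are assumptions chosen for the example.

```python
import numpy as np


def motion_score(prev_frame: np.ndarray, curr_frame: np.ndarray,
                 diff_threshold: int = 25) -> float:
    """Fraction of eye-region pixels that changed between two consecutive
    grayscale frames (simple frame differencing)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(np.mean(diff > diff_threshold))


def detect_blink_candidates(frames, motion_threshold: float = 0.15):
    """Return indices of frames whose motion score exceeds a threshold,
    i.e. candidate eyelid-movement (blink) events."""
    events = []
    for i in range(1, len(frames)):
        if motion_score(frames[i - 1], frames[i]) > motion_threshold:
            events.append(i)
    return events


if __name__ == "__main__":
    # Synthetic example: a mostly static eye-region crop with one sudden
    # brightness change standing in for an eyelid movement.
    rng = np.random.default_rng(0)
    base = rng.integers(80, 120, size=(24, 48), dtype=np.uint8)
    frames = [base.copy() for _ in range(10)]
    frames[5] = np.clip(base.astype(np.int16) + 60, 0, 255).astype(np.uint8)
    print(detect_blink_candidates(frames))  # -> [5, 6]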