“…SiGML relies on HamNoSys as the underlying representation for manuals (Hanke, 2004), but introduces a set of facial nonmanual specifications covering head orientation, eye gaze, brows, eyelids, nose, and mouth; its implementation uses the maskable morphing approach for synthesis. However, there is no consensus on how best to specify facial nonmanual signals, particularly for the mouth, and other research groups have either developed their own custom specifications (Lombardo, Battaglino, Damiano, and Nunnari, 2011) or use an earlier annotation system such as SignWriting (Krnoul, 2010). Further, none of these efforts has yet specified an approach to generating co-occurring facial nonmanual signals.…”