Regarding the optimisation of $\mathbf{U}_C^{(v)}$, $\mathbf{U}_I^{(v)}$, $\mathbf{V}_C$, and $\mathbf{V}_I^{(v)}$, their Lagrangian functions are constructed, respectively. By applying the Karush-Kuhn-Tucker (KKT) conditions to each Lagrangian function, the following update rules for each matrix variable can be derived. Regarding the non-incremental NMF-based algorithms [7,8,11,14,17,18,28] mentioned above, they require the whole dataset whenever they are executed. Therefore, they spend considerable time relearning the common features for each new incoming multi-view instance.
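The paper's specific KKT-derived update rules are not reproduced in this excerpt. As an illustrative sketch only, the classical single-view NMF multiplicative updates (Lee-Seung style) obtained from the KKT conditions of the Frobenius objective $\|X - UV\|_F^2$ take the following form; the function name, parameters, and single-view simplification are assumptions, not the authors' multi-view rules:

```python
import numpy as np

def nmf_multiplicative(X, k, n_iter=200, eps=1e-10, seed=0):
    """Factor a nonnegative X (m x n) as X ~= U @ V with U, V >= 0.

    The updates come from the KKT stationarity conditions of
    min ||X - UV||_F^2 s.t. U, V >= 0: each factor is scaled by the
    ratio of the negative and positive parts of its gradient, which
    keeps every entry nonnegative automatically.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + eps
    V = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # V <- V * (U^T X) / (U^T U V); eps guards against division by zero
        V *= (U.T @ X) / (U.T @ U @ V + eps)
        # U <- U * (X V^T) / (U V V^T)
        U *= (X @ V.T) / (U @ V @ V.T + eps)
    return U, V
```

In the incremental multi-view setting criticised above, the point is that these updates would have to be rerun over the entire accumulated dataset for every new instance, which is what incremental variants avoid.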