Tensors or {\em multi-way arrays} are functions of three or more indices
$(i,j,k,\ldots)$ -- similar to matrices (two-way arrays), which are functions
of two indices $(r,c)$ for (row, column). Tensors have a rich history,
stretching over almost a century, and touching upon numerous disciplines; but
they have only recently become ubiquitous in signal and data analytics at the
confluence of signal processing, statistics, data mining and machine learning.
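To make the definition concrete (the notation here, $\underline{\mathbf{X}}$ for a tensor and $I,J,K$ for its dimensions, is illustrative rather than fixed by this article), a third-order tensor is an array $\underline{\mathbf{X}} \in \mathbb{R}^{I \times J \times K}$ with elements $\underline{\mathbf{X}}(i,j,k)$; the simplest example beyond a matrix is a rank-one tensor, the outer product of three vectors,
\[
\underline{\mathbf{X}} = \mathbf{a} \circ \mathbf{b} \circ \mathbf{c},
\qquad
\underline{\mathbf{X}}(i,j,k) = \mathbf{a}(i)\,\mathbf{b}(j)\,\mathbf{c}(k),
\]
and the tensor rank and rank decompositions covered below concern how many such rank-one terms are needed to synthesize a given tensor.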
This overview article aims to provide a good starting point for researchers and
practitioners interested in learning about and working with tensors. As such,
it focuses on fundamentals and motivation (using various application examples),
aiming to strike an appropriate balance of breadth {\em and depth} that will
enable someone who has taken first graduate courses in matrix algebra and
probability to get started doing research and/or developing tensor algorithms
and software. Some background in applied optimization is useful but not
strictly required. The material covered includes tensor rank and rank
decomposition; basic tensor factorization models and their relationships and
properties (including fairly good coverage of identifiability); broad coverage
of algorithms ranging from alternating optimization to stochastic gradient;
statistical performance analysis; and applications ranging from source
separation to collaborative filtering, mixture and topic modeling,
classification, and multilinear subspace learning.