“…A compression algorithm is defined as either lossless (i.e., when a compressed file is decompressed, the output matches the original file exactly) or lossy (i.e., when a compressed file is decompressed, the output is epsilon-close to the original data, but not identical). Standard lossless compression algorithms used across many domains [5,4,1,37,15] are based on deriving a token-based mapping that reduces the compressed file size [16,18,7,35,10,3,13]. Yet no published studies have attempted to combine these concepts from information theory by explicitly leveraging the various lossless data compression algorithms as feature extractors.…”
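
As an illustrative sketch only (not a method described in the cited works), the idea of using lossless compressors as feature extractors can be made concrete by describing each input through how well several standard compressors shrink it. The choice of compressors (zlib, bz2, lzma) and the feature layout below are assumptions for illustration.

```python
import bz2
import lzma
import os
import zlib


def compression_features(data: bytes) -> list[float]:
    """Illustrative sketch: represent an input by its compression ratios
    under several standard lossless compressors (assumed choices)."""
    raw_len = len(data)
    features = []
    for compress in (zlib.compress, bz2.compress, lzma.compress):
        compressed_len = len(compress(data))
        # Ratio of compressed to original size; lower means more redundancy.
        features.append(compressed_len / raw_len)
    return features


# Example: repetitive text compresses far better than random bytes,
# so the two inputs get clearly different feature vectors.
print(compression_features(b"abcabcabc" * 100))  # low ratios
print(compression_features(os.urandom(900)))     # ratios near (or above) 1.0
```

Such ratio vectors could then be fed to any downstream classifier; the point of the sketch is only that the compressed size itself carries usable information about the structure of the input.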