We introduce a method for processing unstructured data for machine learning based on an LZ-complexity string distance. Computing the LZ-complexity is inherently a serial data compression process; we therefore introduce a string distance computed by a parallel algorithm that uses multiple GPU devices to process unstructured data, which typically exists in large quantities. We use this algorithm to compute a distance-matrix representation of the unstructured data that standard learning algorithms can learn from directly. Our approach eliminates the need for human-based feature definition or extraction. Except for some simple manual data reformatting, it operates on the original raw data and is fully automatic. The parallel computation of the distance matrix is efficient, achieving a speed-up factor of 528 when computing the distances between every possible pair of 16 strings of length 1M bytes. We show that for time-series classification, the distance-matrix representation yields higher learning accuracy than the ubiquitous TFIDF representation for most of a broad set of learning algorithms. Thus, the parallel algorithm can be helpful for learning efficiently and accurately from unstructured data.
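To make the pipeline concrete, the following is a minimal, serial sketch of an LZ-complexity-based string distance and the resulting distance-matrix representation. The abstract does not specify the exact distance definition or the multi-GPU parallelization, so the LZ76 phrase counting (Kaspar-Schuster style), the NCD-style normalization, and the helper names lz_distance and distance_matrix below are illustrative assumptions, not the paper's implementation.

```python
# Sketch: LZ-complexity-based string distance and pairwise distance matrix.
# Assumptions: LZ76 complexity via Kaspar-Schuster counting and an NCD-style
# normalization; the paper's actual distance and GPU parallelization may differ.

import numpy as np


def lz_complexity(s: bytes) -> int:
    """Count the phrases in the LZ76 parsing of s (Kaspar-Schuster algorithm)."""
    n = len(s)
    if n <= 1:
        return n
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:              # end of string reached inside a match
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                 # no earlier match extends: close the phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c


def lz_distance(x: bytes, y: bytes) -> float:
    """NCD-style distance using LZ complexity as the 'compressor' (assumed form)."""
    cx, cy, cxy = lz_complexity(x), lz_complexity(y), lz_complexity(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


def distance_matrix(strings: list[bytes]) -> np.ndarray:
    """Pairwise distance matrix; the paper computes this step in parallel on GPUs."""
    m = len(strings)
    d = np.zeros((m, m))
    for a in range(m):
        for b in range(a + 1, m):
            d[a, b] = d[b, a] = lz_distance(strings[a], strings[b])
    return d


if __name__ == "__main__":
    docs = [b"abracadabra", b"abracadabrx", b"hello world!", b"hello, world"]
    print(distance_matrix(docs))  # this matrix can be fed to standard learners
```

In this sketch the complexity of each concatenated pair is computed serially, which is exactly the bottleneck the abstract refers to; the paper's contribution is to distribute these pairwise computations across multiple GPU devices so that the full matrix is obtained efficiently.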