Canonical distance and dissimilarity measures can fail to capture important relationships in high-throughput sequencing datasets because they cannot represent feature interactions. Learning a dissimilarity with decision tree ensembles avoids this pitfall. Using 16S rRNA data from the lumen and mucosa of the distal and proximal human colon and from the stool of patients with immune-mediated inflammatory diseases, we compared how well the Jaccard and Aitchison metrics preserve pairwise relationships between samples against dissimilarities learned using Random Forests, Extremely Randomized Trees, and LANDMark. We found that dissimilarities learned by unsupervised LANDMark models better captured differences between communities in each dataset. For example, differences between the microbial communities of the colon's distal lumen and mucosa were better reflected by the LANDMark dissimilarity (p ≤ 0.001, R² = 0.476) than by the Jaccard distance (p ≤ 0.001, R² = 0.313) or the Random Forest dissimilarity (p ≤ 0.001, R² = 0.237). In addition, applying Uniform Manifold Approximation and Projection (UMAP) to the dissimilarity matrices and transforming the result with principal components analysis produced two-dimensional projections that captured the main axes of variation while preserving the pairwise distances between samples (e.g., ρ = 0.8804, p ≤ 0.001 for the distal colon dissimilarities). Finally, supervised LANDMark models tended to outperform both Random Forest and Extremely Randomized Tree classifiers. Models employing multivariate splits can improve the analysis of complex high-throughput sequencing datasets. The improvements observed in this work likely result from the ability of these models to reduce noise from uninformative features. In an unsupervised setting, LANDMark models preserve pairwise relationships between samples. When used in a supervised manner, these models tend to learn decision boundaries that better reflect the biological variation within the dataset.
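
To illustrate the general workflow described above, the sketch below shows one way to learn an unsupervised tree-ensemble dissimilarity and project it with UMAP followed by principal components analysis. This is a minimal, hedged example rather than the published LANDMark implementation: it assumes a samples-by-features table `X`, uses a scikit-learn Random Forest trained to separate real samples from a feature-permuted copy (one common way to obtain an "unsupervised" forest proximity), and uses umap-learn with a precomputed dissimilarity; all package names, parameters, and the toy data are illustrative and not those used in the study.

```python
# Sketch: unsupervised tree-ensemble dissimilarity -> UMAP -> PCA projection.
# Not the authors' implementation; assumptions are noted in the comments.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr
import umap


def unsupervised_rf_dissimilarity(X, n_estimators=500, random_state=0):
    """Dissimilarity = 1 - fraction of trees in which two samples share a leaf."""
    rng = np.random.default_rng(random_state)

    # Assumed strategy: build a synthetic "background" class by permuting each
    # feature independently, then train a forest to separate real samples
    # (label 1) from the permuted copy (label 0).
    X_perm = np.column_stack([rng.permutation(col) for col in X.T])
    X_all = np.vstack([X, X_perm])
    y_all = np.concatenate([np.ones(len(X)), np.zeros(len(X_perm))])

    forest = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    forest.fit(X_all, y_all)

    # Leaf indices for the real samples only: shape (n_samples, n_estimators).
    leaves = forest.apply(X)

    # Proximity: how often two samples land in the same terminal node.
    same_leaf = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
    return 1.0 - same_leaf


def embed_dissimilarity(D, n_neighbors=15, random_state=0):
    """UMAP on a precomputed dissimilarity, then PCA to orient the main axes of variation."""
    emb = umap.UMAP(
        n_components=2, metric="precomputed",
        n_neighbors=n_neighbors, random_state=random_state,
    ).fit_transform(D)
    return PCA(n_components=2).fit_transform(emb)


if __name__ == "__main__":
    # Toy stand-in for a 16S feature table (e.g., ASV counts), not real data.
    X = np.random.default_rng(42).poisson(lam=3.0, size=(60, 200)).astype(float)

    D = unsupervised_rf_dissimilarity(X)
    coords = embed_dissimilarity(D)

    # How well does the 2-D projection preserve the learned pairwise dissimilarities?
    rho, p = spearmanr(squareform(D, checks=False), pdist(coords))
    print(f"Spearman rho = {rho:.3f} (p = {p:.3g})")
```

The final correlation between the learned dissimilarities and the distances in the 2-D projection mirrors the ρ statistic reported in the abstract; substituting LANDMark or Extremely Randomized Trees for the Random Forest step would follow the same pattern.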