Gene expression data holds the potential to offer deep physiological insights into the dynamic state of a cell, beyond the static coding of the genome alone. We believe that realizing this potential requires specialized machine learning methods capable of exploiting underlying biological structure, but the development of such models is hampered by the lack of an empirical methodological foundation, including published benchmarks and well-characterized baselines. In this work, we lay that foundation by profiling a battery of classifiers against newly defined, biologically motivated classification tasks on multiple L1000 gene expression datasets. In addition, on our smallest dataset, a privately produced L1000 corpus, we profile per-subject generalizability, providing a novel assessment of performance that is overlooked in many typical analyses. We compare traditional classifiers, including feed-forward artificial neural networks (FF-ANNs), linear methods, random forests, decision trees, and k-nearest neighbor classifiers, as well as graph convolutional neural networks (GCNNs), which augment learning with prior biological domain knowledge. We find that GCNNs offer performance improvements given sufficient data, excelling at all tasks on our largest dataset. On smaller datasets, FF-ANNs offer the best performance. Linear models significantly underperform at all dataset scales but offer the best per-subject generalizability. Ultimately, these results suggest that structured models such as GCNNs can represent a new direction of focus for the field as the scale of available data continues to increase.
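To make the benchmarking setup concrete, the sketch below illustrates one way a battery of traditional classifiers could be profiled on gene-expression-style data. It is a minimal illustration under stated assumptions, not the authors' actual pipeline: it uses scikit-learn estimators and a synthetic stand-in for an L1000 matrix (978 landmark genes), and it omits the GCNN comparison, which would additionally require a prior biological interaction graph.

```python
# Minimal sketch (not the authors' code): cross-validating several traditional
# classifiers on a synthetic stand-in for an L1000 expression matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic placeholder data: rows are expression profiles, columns mimic the
# 978 L1000 landmark genes; labels stand in for a biologically motivated task.
X, y = make_classification(n_samples=500, n_features=978, n_informative=50,
                           n_classes=4, random_state=0)

classifiers = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "ff_ann": MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500,
                            random_state=0),
}

for name, clf in classifiers.items():
    # Standardize features, then evaluate each model with 5-fold cross-validation.
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(f"{name:>20s}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A per-subject generalizability analysis, as described above, would replace the plain 5-fold split with grouped cross-validation (e.g. scikit-learn's GroupKFold keyed on subject identifiers) so that no subject appears in both training and test folds.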