Here we present an analysis of a 12-subject electroencephalographic (EEG) data set in which participants engaged in prolonged, self-paced episodes of guided emotion imagination with eyes closed. Our goal is to predict, from a short EEG segment, whether the participant was imagining a positive- or negative-valence emotional scenario during that segment, using a predictive model learned via machine learning. The challenge lies in generalizing to novel (i.e., previously unseen) emotion episodes drawn from a wide variety of scenarios (love, awe, frustration, anger, etc.) based purely on spontaneous oscillatory EEG activity, without stimulus event-locked responses. Using a variant of the Filter-Bank Common Spatial Pattern (FBCSP) algorithm, we achieve an average accuracy of 71.3% in binary valence classification across 12 different emotional imagery scenarios under rigorous block-wise cross-validation.
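To make the approach concrete, the sketch below illustrates the generic FBCSP recipe (band-pass filter bank, per-band CSP spatial filters, log-variance features, then a classifier) on synthetic two-class data. It is not the variant used in this study: the filter-bank bands, number of CSP filter pairs, toy data generator, and nearest-class-mean classifier are all illustrative assumptions.

```python
# Illustrative FBCSP sketch on toy data (hypothetical parameters throughout;
# not the study's actual pipeline).
import numpy as np
from scipy.signal import butter, lfilter
from scipy.linalg import eigh

def bandpass(x, lo, hi, fs):
    """Band-pass filter each channel of x (channels x samples)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, x, axis=-1)

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters from two lists of (channels x samples) trials."""
    cov = lambda ts: np.mean([t @ t.T / np.trace(t @ t.T) for t in ts], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Generalized eigenproblem Ca w = lambda (Ca + Cb) w; extreme eigenvalues
    # give the most discriminative spatial projections.
    vals, vecs = eigh(Ca, Ca + Cb)
    keep = np.r_[np.arange(n_pairs), np.arange(len(vals) - n_pairs, len(vals))]
    return vecs[:, keep].T

def logvar_features(trial, filters):
    """Normalized log-variance of the spatially filtered signals."""
    v = np.var(filters @ trial, axis=1)
    return np.log(v / v.sum())

# --- toy data: class 1 has extra alpha-band power on channel 0 (assumption) ---
rng = np.random.default_rng(0)
fs, n_ch, n_samp = 128, 8, 512
bands = [(4, 8), (8, 13), (13, 30)]  # assumed filter bank

def make_trial(label):
    x = rng.standard_normal((n_ch, n_samp))
    if label:
        x[0] += 3 * bandpass(rng.standard_normal((1, n_samp)), 8, 13, fs)[0]
    return x

train = [(make_trial(y), y) for y in [0, 1] * 20]
test = [(make_trial(y), y) for y in [0, 1] * 10]

# Fit one set of CSP filters per frequency band on the training trials.
banks = []
for lo, hi in bands:
    a = [bandpass(x, lo, hi, fs) for x, y in train if y == 0]
    b = [bandpass(x, lo, hi, fs) for x, y in train if y == 1]
    banks.append(((lo, hi), csp_filters(a, b)))

def featurize(x):
    return np.concatenate([logvar_features(bandpass(x, lo, hi, fs), W)
                           for (lo, hi), W in banks])

# Nearest-class-mean classifier as a stand-in for LDA/SVM.
X = np.array([featurize(x) for x, _ in train])
y = np.array([t for _, t in train])
means = [X[y == c].mean(axis=0) for c in (0, 1)]
pred = [int(np.argmin([np.linalg.norm(featurize(x) - m) for m in means]))
        for x, _ in test]
acc = np.mean([p == t for p, (_, t) in zip(pred, test)])
print(f"toy accuracy: {acc:.2f}")
```

On this strongly separable toy problem the pipeline should classify nearly all test trials correctly; the 71.3% figure reported above reflects the much harder real-data setting with block-wise cross-validation across imagery scenarios.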