Objective: Noise reduction algorithms in current hearing prostheses lack information about the sound source a user attends to when multiple sources are present. To resolve this issue, they can be complemented with auditory attention decoding (AAD) algorithms, which decode attention directly from the brain using electroencephalography (EEG) sensors. State-of-the-art AAD algorithms employ a stimulus reconstruction approach, in which the envelope of the attended source is reconstructed from the EEG and correlated with the envelopes of the individual sources. This approach, however, performs poorly on short signal segments, while longer segments yield impractically long detection delays when the user switches attention. Methods: We propose decoding the directional focus of attention using filterbank common spatial pattern filters (FB-CSP) as an alternative AAD paradigm, which does not require access to the clean source envelopes. Results: The proposed FB-CSP approach outperforms both the traditional stimulus reconstruction approach and a convolutional neural network approach on the same task. We achieve a high accuracy (80% for 1 s windows and 70% for quasi-instantaneous decisions), which is sufficient to reach minimal expected switch durations below 4 s. We also demonstrate that the method can be used on unlabeled data from an unseen subject and with only a subset of EEG channels located around the ear to emulate a wearable EEG setup. Conclusion: The proposed FB-CSP method provides fast and accurate decoding of the directional focus of auditory attention. Significance: The high accuracy on very short data segments is a major step forward towards practical neuro-steered hearing prostheses.

Index Terms-auditory attention decoding, directional focus of attention, brain-computer interface, common spatial pattern filter, electroencephalography, neuro-steered hearing prosthesis
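For readers unfamiliar with the stimulus reconstruction paradigm referred to in the abstract, the following minimal sketch illustrates its decision rule: an estimate of the attended envelope is reconstructed from the EEG and correlated with each candidate speaker envelope, and the best-matching speaker is selected. The sketch is not taken from the paper; the function names, the purely spatial (lag-free) linear backward decoder, and the data shapes are illustrative assumptions.

import numpy as np

def decode_attention(eeg, envelopes, decoder):
    # eeg:       (samples, channels) EEG segment
    # envelopes: list of (samples,) candidate speaker envelopes
    # decoder:   (channels,) pre-trained backward-model weights
    #            (assumption: a single spatial filter without time lags)

    # Reconstruct an estimate of the attended speech envelope from the EEG
    reconstructed = eeg @ decoder

    # Correlate the reconstruction with each candidate envelope
    corrs = [np.corrcoef(reconstructed, env)[0, 1] for env in envelopes]

    # Decode the speaker whose envelope correlates best as the attended one
    return int(np.argmax(corrs))

In practice such a backward decoder would include multiple time lags and be trained on labeled data (e.g., with regularized least squares); on short windows the correlation estimates become unreliable, which is exactly the limitation of stimulus reconstruction that motivates the FB-CSP approach proposed in the paper.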