Objective. We consider the cross-subject decoding problem from local field potential (LFP) signals, where training data collected from the prefrontal cortex (PFC) of a source subject is used to decode intended motor actions in a destination subject. Approach. We propose a novel supervised transfer learning technique, referred to as data centering, which is used to adapt the feature space of the source to the feature space of the destination. The key ingredients of data centering are the transfer functions used to model the deterministic component of the relationship between the source and destination feature spaces. We propose an efficient data-driven estimation approach for linear transfer functions that uses the first- and second-order moments of the class-conditional distributions. Main result. We apply our data centering technique with linear transfer functions for cross-subject decoding of eye movement intentions in an experiment where two macaque monkeys perform memory-guided visual saccades to one of eight target locations. The results show a peak cross-subject decoding performance of 80%, a substantial improvement over a random-choice decoder. In addition, data centering outperforms standard sampling-based methods in setups with imbalanced training data. Significance. The analyses presented herein demonstrate that the proposed data centering is a viable novel technique for reliable LFP-based cross-subject brain-computer interfacing and neural prostheses.

Review of existing work. Despite the challenges imposed by the non-stationary nature of neuronal activity signals, the cross-subject decoding problem in non-invasive, EEG-based BCIs has been addressed and progress has been reported. To this end, methods from the emerging field of transfer learning have been extensively applied, albeit with varying degrees of success [9].
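The abstract states that the linear transfer functions are estimated from the first- and second-order moments of the class-conditional distributions, but does not reproduce the estimator here. As a rough illustration of the idea, the following sketch aligns pooled source statistics to destination statistics via moment matching (whitening with the source covariance, then re-coloring with the destination covariance); the function names, the pooled rather than class-conditional treatment, and the `eps` regularizer are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def moment_matching_transfer(X_src, X_dst, eps=1e-6):
    """Estimate a linear map aligning the first and second moments of
    X_src (n_src x d) to those of X_dst (n_dst x d).

    Returns (A, mu_src, mu_dst) such that the mapped features
        x_mapped = A @ (x - mu_src) + mu_dst
    have (approximately) the destination mean and covariance.
    """
    mu_s, mu_d = X_src.mean(axis=0), X_dst.mean(axis=0)
    d = X_src.shape[1]
    # Regularize covariances so the matrix square roots are well defined.
    C_s = np.cov(X_src, rowvar=False) + eps * np.eye(d)
    C_d = np.cov(X_dst, rowvar=False) + eps * np.eye(d)

    def sqrtm(C):
        # Symmetric PSD square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

    def inv_sqrtm(C):
        w, V = np.linalg.eigh(C)
        return (V / np.sqrt(np.clip(w, eps, None))) @ V.T

    # Whiten with the source covariance, re-color with the destination one.
    A = sqrtm(C_d) @ inv_sqrtm(C_s)
    return A, mu_s, mu_d

def apply_transfer(X, A, mu_s, mu_d):
    """Map source-domain features into the destination feature space."""
    return (X - mu_s) @ A.T + mu_d
```

A class-conditional variant, closer in spirit to the description above, would simply fit one such map per class label using the labeled source and destination training data.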
In its most general form, transfer learning refers to a set of data mining and machine learning techniques, procedures and algorithms designed to extrapolate knowledge acquired in a given domain and apply it in a different but related domain [9], [15], [16]. The a priori assumption in transfer learning is that there exist inherent connections, correlations and/or similarities between the domains, and the objective of any transfer learning algorithm is to discover these similarity structures and foster reliable transfer of knowledge across the domains [15]. It should also be noted that transfer learning is often done in a non-parametric, data-driven manner without relying extensively on detailed statistical models; this has proven effective in problems characterized by highly non-stationary temporal and/or spatial behaviour.

In the context of cross-subject BCIs, which can be seen as one specific example of transfer learning, several techniques for identifying and estimating structural similarities between neurological data collected in different subjects have emerged; they can be organized into several broader categories which we briefly outline and discuss ne...