Data fusion refers to the joint analysis of multiple datasets that provide different (e.g., complementary) views of the same task. In general, it can extract more information than separate analyses. Jointly analyzing electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) measurements has proved highly beneficial to the study of brain function, mainly because these neuroimaging modalities have complementary spatiotemporal resolution: EEG offers high temporal resolution, whereas fMRI offers superior spatial resolution. The EEG-fMRI fusion methods reported so far ignore the underlying multiway nature of the data in at least one of the modalities and/or rely on very strong assumptions about the relation between the respective datasets. For example, in multisubject analysis, the hemodynamic response function is commonly assumed to be known a priori for all subjects and/or the coupling across corresponding modes is assumed to be exact (hard). In this article, these two limitations are overcome by adopting tensor models for both modalities and by following soft, flexible coupling approaches to implement the multimodal fusion. The results obtained are compared against those of parallel independent component analysis and hard-coupling alternatives, on both synthetic and real data (epilepsy and visual oddball paradigms). Our results demonstrate the clear advantage of soft and flexible coupled tensor decompositions in scenarios that do not conform to the hard coupling assumption.
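As a minimal illustration of the hard versus soft coupling distinction (a sketch under assumed CP models and generic notation, not the exact formulation used in this article), suppose the EEG tensor $\mathcal{X}$ and the fMRI tensor $\mathcal{Y}$ each admit a canonical polyadic decomposition and are linked through a factor matrix along one shared mode (e.g., the subject mode). Hard coupling forces that factor to be identical in both models,
\[
\min_{\mathbf{A},\mathbf{B},\mathbf{C},\mathbf{D},\mathbf{E}}
\bigl\| \mathcal{X} - [\![\mathbf{A},\mathbf{B},\mathbf{C}]\!] \bigr\|_F^2
+ \bigl\| \mathcal{Y} - [\![\mathbf{A},\mathbf{D},\mathbf{E}]\!] \bigr\|_F^2 ,
\]
whereas soft (flexible) coupling lets each modality keep its own factor and only penalizes their discrepancy through a weight $\lambda$ (here a hypothetical illustration parameter),
\[
\min_{\mathbf{A}_x,\mathbf{A}_y,\mathbf{B},\mathbf{C},\mathbf{D},\mathbf{E}}
\bigl\| \mathcal{X} - [\![\mathbf{A}_x,\mathbf{B},\mathbf{C}]\!] \bigr\|_F^2
+ \bigl\| \mathcal{Y} - [\![\mathbf{A}_y,\mathbf{D},\mathbf{E}]\!] \bigr\|_F^2
+ \lambda \bigl\| \mathbf{A}_x - \mathbf{A}_y \bigr\|_F^2 .
\]
The soft formulation thus accommodates datasets whose shared modes are only approximately related, which is the scenario in which the hard coupling assumption breaks down.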