There are rich structures in off-task neural activity. For example, task-related neural codes are thought to be reactivated in a systematic way during rest. This reactivation is hypothesised to reflect a fundamental computation that supports a variety of cognitive functions. Here, we introduce an analysis toolkit (TDLM) for analysing this activity. TDLM combines nonlinear classification and linear temporal modelling to test for statistical regularities in sequences of neural representations. It is developed using non-invasive neuroimaging data and is designed to control for confounds and maximise sequence detection ability. The method can be extended to rodent electrophysiological recordings. We outline how TDLM can successfully reveal human replay during rest, based upon non-invasive magnetoencephalography (MEG) measurements, with strong parallels to rodent hippocampal replay. TDLM can therefore advance our understanding of sequential computation and promote a richer convergence between animal and human neuroscience research.

TDLM is a general and flexible tool for measuring neural sequences. It facilitates cross-species investigations by linking large-scale measurements in humans to cellular measurements in non-human species. We outline its promise for revealing abstract cognitive processes that extend beyond sensory representation, potentially opening doors to new avenues of research in cognitive science. All code and facilities will be available at https://github.com/yunzheliu/TDLM.
RESULTS
TDLM
Overview of TDLM

Our primary goal is to test for temporal structure in neural activity. To achieve this, we would ideally like a method that (1) uncovers regularities in the reactivation of neural activity, and (2) tests whether these regularities conform to a hypothesised structure. Here, the structure between neural representations is expressed as their sequential reactivation in time, i.e., a sequence. In what follows, we will use the terms "temporal structure" and "sequence" interchangeably.

The starting point of TDLM is a set of n time series, each corresponding to a decoded neural representation of a variable of interest. These time series could themselves be obtained in several ways, described in detail in a later section ("Getting the states"). The aim of TDLM is to identify task-related regularities in sequences of these representations off-task.

Consider, for example, a task in which participants have been trained such that n = 4 distinct sensory cues (A, B, C, and D) appear in a consistent order (A → B → C → D) (Fig 1a). If we are interested in replay of this sequence during subsequent resting periods, we might want to ask statistical questions of the following form: "Does the existence of a neural representation of A, at time T in the rest period, predict the occurrence of a representation of B at time T + Δt?", and similarly for B → C and C → D.

In TDLM we ask such questions using a two-step process. First, for each of the n² possible pairs of variables Xi and Xj, we find the correlation between the Xi time series and the Δt-shifted Xj time series.
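To make this first step concrete, the sketch below (in Python/NumPy) shows one way such lagged pairwise correlations might be computed from decoded state time series. The function name, array shapes, and the choice of Pearson correlation here are illustrative assumptions for exposition, not the reference implementation; the toolkit code at the GitHub link above is authoritative.

import numpy as np

def lagged_correlations(X, max_lag):
    """
    For every ordered pair of decoded states (i, j) and every time lag dt,
    compute the correlation between state i's time series and the
    dt-shifted time series of state j.

    Parameters
    ----------
    X : ndarray, shape (T, n)
        Decoded state evidence: T time points, n states (e.g. A, B, C, D).
    max_lag : int
        Maximum time shift (in samples) to evaluate.

    Returns
    -------
    corr : ndarray, shape (max_lag, n, n)
        corr[dt - 1, i, j] is the correlation between X[:, i] at time t
        and X[:, j] at time t + dt.
    """
    T, n = X.shape
    corr = np.zeros((max_lag, n, n))
    for dt in range(1, max_lag + 1):
        past = X[:-dt]      # state evidence at time t
        future = X[dt:]     # state evidence at time t + dt
        # Full correlation matrix between the two sets of columns;
        # the off-diagonal block holds corr(past_i, future_j).
        c = np.corrcoef(past.T, future.T)   # shape (2n, 2n)
        corr[dt - 1] = c[:n, n:]
    return corr

# Hypothetical usage: 4 decoded states, lags up to 60 samples.
# X = decoded_state_timecourses     # shape (T, 4); obtained as in "Getting the states"
# corr = lagged_correlations(X, max_lag=60)
# corr[dt - 1, 0, 1] then quantifies how well A at time t predicts B at t + dt.

In this sketch, each (i, j, dt) entry addresses exactly the question posed above for one pairwise transition and one lag; the second step of TDLM, described next, asks whether the pattern of such pairwise effects conforms to the hypothesised task sequence.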