Purpose: To improve time-resolved reconstructions by training auto-encoders to learn compact representations of Bloch-simulated signal evolution and inserting the decoder into the forward model.

Methods: Building on model-based nonlinear and linear subspace techniques, we train auto-encoders on dictionaries of simulated signal evolution to learn compact, nonlinear latent representations. The proposed latent signal model framework inserts the decoder portion of the auto-encoder into the forward model and directly reconstructs the latent representation. Latent signal models thus serve as a proxy for fast, feasible differentiation through the Bloch equations used to simulate signal. This work performs experiments in the context of T2-shuffling, gradient-echo EPTI, and MPRAGE-shuffling. We compare how efficiently auto-encoders represent signal evolution relative to linear subspaces. Simulation and in vivo experiments then evaluate whether reducing degrees of freedom, by incorporating our proxy for the Bloch equations (the decoder portion of the auto-encoder) into the forward model, improves reconstructions over subspace constraints.

Results: An auto-encoder with one real latent variable represents single-tissue fast spin echo, EPTI, and MPRAGE signal evolution to within 0.15% normalized RMS error, enabling reconstruction problems with 3 degrees of freedom per voxel (one real latent variable plus one complex scaling) compared with linear models requiring 4-8 degrees of freedom per voxel. In simulated and in vivo T2-shuffling experiments and in vivo EPTI experiments, the proposed framework achieves consistent quantitative normalized RMS error improvement over linear approaches. Qualitatively, the proposed approach yields images with reduced blurring and noise amplification in MPRAGE-shuffling experiments.
Conclusion: Directly solving for nonlinear latent representations of signal evolution improves time-resolved MRI reconstructions.
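To make the framework concrete, the following minimal PyTorch sketch illustrates the two stages described above: training an auto-encoder on a dictionary of simulated signal evolutions, then inserting its decoder into a simple forward model and solving directly for a per-voxel real latent variable and complex scale. All specifics here are illustrative assumptions: the network sizes, the mono-exponential decay used as a stand-in for Bloch simulation, and the fully sampled single-voxel forward model are hypothetical, since the actual architectures, sequence parameters, and undersampled multi-coil sampling operators are not specified in this abstract.

```python
# Minimal, hypothetical sketch of the latent signal model framework.
# A real dictionary would come from Bloch simulation; here a crude
# mono-exponential T2 decay stands in for fast spin echo evolution.
import torch
import torch.nn as nn

n_echoes, n_latent = 80, 1  # signal length; one real latent variable


class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_echoes, 64), nn.Tanh(),
                                 nn.Linear(64, n_latent))
        self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.Tanh(),
                                 nn.Linear(64, n_echoes))

    def forward(self, x):
        return self.dec(self.enc(x))


# Stand-in "dictionary" of simulated signal evolutions.
t = torch.linspace(0.0, 0.4, n_echoes)
T2 = torch.rand(2048, 1) * 0.25 + 0.02
dictionary = torch.exp(-t / T2)

# Stage 1: train the auto-encoder to compress the dictionary.
ae = AutoEncoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(dictionary), dictionary)
    loss.backward()
    opt.step()

# Stage 2: freeze the trained decoder and solve directly for one
# voxel's latent code z and complex scale c so that the decoded
# signal matches the measurements. A practical forward model would
# additionally include coil sensitivities and undersampled Fourier
# encoding.
for p in ae.parameters():
    p.requires_grad_(False)

y = dictionary[0] * (0.7 + 0.2j)  # synthetic measured signal
z = torch.zeros(1, n_latent, requires_grad=True)
c = torch.tensor([1.0 + 0.0j], requires_grad=True)
opt = torch.optim.Adam([z, c], lr=1e-2)
for _ in range(1000):
    opt.zero_grad()
    pred = ae.dec(z).to(torch.complex64) * c  # decoder inside forward model
    resid = pred - y
    loss = (resid.conj() * resid).real.mean()
    loss.backward()
    opt.step()
```

This sketch reflects the degrees-of-freedom accounting in the Results: each voxel is parameterized by one real latent variable plus one complex scale (3 real unknowns), with differentiation through the decoder replacing differentiation through the Bloch equations.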