Dynamically scheduled high-level synthesis (HLS) achieves higher throughput than statically scheduled HLS on codes with unpredictable memory accesses. However, the increased throughput comes at the price of increased resource usage and a longer critical path, resulting in a lower clock frequency. The decrease in clock frequency can be significant, often nullifying any throughput improvement over static scheduling. Recent work has presented methods for combining static and dynamic scheduling to achieve high-throughput circuits with a fast critical path for dynamic codes. However, circuits that require dynamically scheduled memory accesses still suffer from a reduced frequency. This paper fills this gap by presenting a method for implementing dynamically scheduled memory operations in HLS at a high clock frequency. Dynamic scheduling of memory operations is realized with a load-store queue (LSQ). We present a novel LSQ design adapted to the nature of spatial architectures, with aggressive specialization to the target code, a unique opportunity in HLS. Our LSQ design works for both on-chip and off-chip memory and is integrated with a compiler that combines dynamic and static scheduling. We show a method to speculatively allocate addresses to our LSQ, significantly increasing pipeline parallelism in codes that could not benefit from an LSQ before. In stark contrast to traditional load value speculation, our approach incurs no overhead on misspeculation. On a set of ten benchmarks, our approach achieves speedups of up to 10× over static HLS and up to 4× over dynamic HLS that uses an LSQ from previous work, while using several times fewer resources and scaling to larger queues.
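To make the notion of "unpredictable memory accesses" concrete, the following minimal C sketch shows the kind of kernel the abstract alludes to. It is a hypothetical histogram loop, not an example taken from this paper: the store address hist[idx[i]] depends on runtime data, so a static HLS scheduler must conservatively assume every iteration may conflict with the previous one and serialize them, whereas an LSQ can disambiguate the addresses at runtime and keep the pipeline full whenever they do not collide.

```c
#include <stdio.h>

#define N    1024
#define BINS 256

/* Hypothetical kernel with data-dependent memory accesses.
 * The read-modify-write through hist[idx[i]] creates a potential
 * RAW hazard between loop iterations that static scheduling must
 * assume always occurs; dynamic memory scheduling via an LSQ
 * resolves the hazard only when the addresses actually match. */
void histogram(const int idx[N], const int weight[N], int hist[BINS]) {
    for (int i = 0; i < N; i++) {
        hist[idx[i]] += weight[i];
    }
}

int main(void) {
    static int idx[N], weight[N], hist[BINS] = {0};
    /* Synthetic inputs: addresses that look unpredictable to the compiler. */
    for (int i = 0; i < N; i++) {
        idx[i] = (i * 7) % BINS;
        weight[i] = 1;
    }
    histogram(idx, weight, hist);
    printf("hist[0] = %d\n", hist[0]);
    return 0;
}
```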