Temporal Diffusion Ratio (TDR) is a recently proposed dMRI technique (Dell'Acqua, 2019) that provides contrast between areas with restricted diffusion and areas either without restricted diffusion or with length scales too small to characterise. It therefore has potential for mapping pore sizes, in particular large axon diameters or other cellular structures. TDR employs the signal from two dMRI acquisitions obtained with the same large b-value but with different diffusion times and gradient settings. TDR is advantageous in that it uses standard acquisition sequences, makes no assumptions about the underlying tissue structure, and requires no model fitting, thereby avoiding issues related to model degeneracy. This work optimises, for the first time, the TDR diffusion sequences in simulation for a range of different tissues and scanner constraints. We extend the original work (which considers substrates containing cylinders) by additionally considering the TDR signal obtained from spherical structures, representing cell soma in tissue. Our results show that contrasting an acquisition with short gradient duration and short diffusion time against an acquisition with long gradient duration and long diffusion time improves the TDR contrast for a wide range of pore configurations. Additionally, in the presence of Rician noise, computing TDR from a subset (50% or fewer) of the acquired diffusion gradients, rather than from the entire shell as originally proposed, further improves the contrast. Finally, we demonstrate the results experimentally in the rat spinal cord. In line with the simulations, the experimental data show that optimised TDR improves the contrast compared with non-optimised TDR. Furthermore, we find a strong correlation between TDR and histological measurements of axon diameter. In conclusion, TDR has great potential and is a very promising alternative, or potentially a complement, to model-based approaches for mapping pore sizes and restricted diffusion in general.
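
For concreteness, the sketch below illustrates a voxel-wise TDR computation from two matched-b-value shells, including the subset step described above. It assumes the convention TDR = (S_long − S_short) / S_long, where S_long and S_short are signals averaged over gradient directions from the long and short diffusion-time acquisitions; the per-voxel direction-selection heuristic (keeping the directions with the largest signal difference) and all array names are illustrative assumptions, not the exact criterion or pipeline from the paper.

```python
import numpy as np

def tdr_map(s_short, s_long, subset_fraction=0.5):
    """Voxel-wise Temporal Diffusion Ratio from two same-b-value shells.

    s_short : (n_voxels, n_dirs) signals, short gradient duration / short diffusion time
    s_long  : (n_voxels, n_dirs) signals, long gradient duration / long diffusion time
    subset_fraction : fraction of gradient directions used per voxel
        (subset_fraction=1.0 reproduces the original whole-shell TDR).
    """
    n_dirs = s_long.shape[1]
    n_keep = max(1, int(round(subset_fraction * n_dirs)))

    # Illustrative subset rule: per voxel, keep the directions with the
    # largest long-minus-short signal difference, where restricted diffusion
    # is expected to dominate the contrast. (Assumed heuristic.)
    diff = s_long - s_short
    idx = np.argsort(diff, axis=1)[:, -n_keep:]

    rows = np.arange(s_long.shape[0])[:, None]
    mean_long = s_long[rows, idx].mean(axis=1)
    mean_short = s_short[rows, idx].mean(axis=1)

    # Assumed TDR convention: relative signal gain at the longer diffusion
    # time. Restricted pores attenuate less at long diffusion times (TDR > 0),
    # while free diffusion gives matched signals at matched b-value (TDR ~ 0).
    return (mean_long - mean_short) / mean_long
```

The heuristic reflects the intuition behind the subset result: under Rician noise, directions carrying little restriction-driven signal difference mainly add noise to the ratio, so discarding them can raise the effective contrast.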