Functional magnetic resonance imaging (fMRI) commonly uses gradient-recalled echo (GRE) signals to detect regional hemodynamic changes arising from neural activity. While the spatial localization of activation has proven valuable, the temporal response remains a poor index of the timing of neural activity. In particular, the hemodynamic response may fail to resolve sub-second temporal differences between brain regions because of its vascular signal origin, noise in the data, or both. This study evaluated the performance of latency estimation across fMRI techniques in two event-related experiments at 3 T. Experiment I assessed latency variations within the visual cortex and their relationship with contrast-to-noise ratio (CNR) for GRE, spin echo (SE), and diffusion-weighted SE (DWSE) acquisitions. Experiment II presented visual stimuli with delays between the two hemifields (delay time = 0, 250, and 500 ms) to assess the temporal resolving power of three protocols: GRE with TR = 1000 ms (GRE-TR1000), GRE with TR = 500 ms (GRE-TR500), and SE with TR = 1000 ms (SE-TR1000). In experiment I, DWSE showed the earliest latency, followed by SE and then GRE, and latency variation decreased as CNR increased; however, GRE and SE showed similar latency variation even though SE had lower CNR. In experiment II, measured stimulus delays were significantly correlated with the preset delays under all conditions. Inter-subject variation in the measured delay was greatest with GRE-TR1000, intermediate with GRE-TR500, and smallest with SE-TR1000. In conclusion, blood oxygenation level-dependent responses obtained with GRE exhibit higher CNR without increased latency variation in the visual cortex, whereas SE can potentially improve latency estimation, especially for group analysis.
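The abstract does not specify how latency was estimated, so the following is only a minimal illustrative sketch of one common approach to the kind of delay measurement described in experiment II: interpolate two region-of-interest time series to sub-TR resolution and take the lag that maximizes their cross-correlation. The function name `estimate_delay`, the upsampling factor, and the Gaussian toy responses are all hypothetical and stand in for whatever model-based estimator the study actually used.

```python
import numpy as np

def estimate_delay(ts_a, ts_b, tr, upsample=50):
    """Estimate the latency difference (s) between two equally long ROI
    time series by locating the peak of their cross-correlation on an
    upsampled time grid. Positive values mean ts_b lags ts_a."""
    n = len(ts_a)
    t = np.arange(n) * tr
    t_fine = np.linspace(0.0, t[-1], n * upsample)
    # Interpolate to sub-TR resolution so sub-second lags are resolvable.
    a = np.interp(t_fine, t, ts_a - ts_a.mean())
    b = np.interp(t_fine, t, ts_b - ts_b.mean())
    xc = np.correlate(b, a, mode="full")
    lags = (np.arange(len(xc)) - (len(a) - 1)) * (t_fine[1] - t_fine[0])
    return lags[np.argmax(xc)]

# Toy usage: two noisy responses offset by 250 ms, sampled at TR = 1 s.
tr = 1.0
t = np.arange(0.0, 60.0, tr)
response = lambda t0: np.exp(-((t - t0) ** 2) / 8.0)  # toy hemodynamic bump
rng = np.random.default_rng(0)
left = response(10.0) + 0.05 * rng.standard_normal(t.size)
right = response(10.25) + 0.05 * rng.standard_normal(t.size)
print(f"estimated delay: {estimate_delay(left, right, tr) * 1000:.0f} ms")
```

The upsampling step matters here: at TR = 1000 ms the raw sampling grid cannot express a 250 ms lag, which is one reason the abstract compares protocols with different TRs and CNRs.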