Embedded systems used in harsh environments are susceptible to bit-flips, which can cause data flow errors. To increase the reliability of embedded systems, numerous data flow error detection techniques have been developed. It is, however, difficult to identify the best technique to apply, due to differences in the way these techniques are evaluated in the current literature. This paper presents an empirical comparative study of seven existing techniques. We measured fault coverage, execution time overhead, and code size overhead. We conclude that soft error detection using software redundancy (SEDSR) and error detection by duplicated instructions (EDDI) offer a better trade-off between fault coverage and overheads than software-implemented fault tolerance (SWIFT), critical block duplication (CBD), and overhead reduction (VAR3+). Error detection by diverse data and duplicated instructions (ED⁴I) and software approach (SA) achieved better fault coverage, at the expense of higher execution time and code size overheads.