This paper presents a field study of real errors found in space software requirements documents. The goal is to understand and characterize the most frequent types of requirement problems in this critical application domain. To classify the software requirement errors analyzed, we initially used a well-known existing taxonomy, which we later extended to allow a more thorough analysis. The results of the study show a high rate of requirement errors (9.5 errors per 100 requirements), which is surprising given that the focus of the work is critical embedded software. Besides characterizing the most frequent types of errors, the paper also proposes a set of operators that define how to inject realistic errors into requirements documents. These operators can be used in several scenarios, including: evaluating and training reviewers, estimating the number of requirement errors in real specifications, defining checklists for quick requirements verification, and defining benchmarks for requirements specifications.