This article addresses the question of whether the human parsing mechanism (HPM) always derives sentence meaning from representations that are computed algorithmically, or whether it sometimes resorts to non-algorithmic strategies that may result in misinterpretations. Misinterpretation effects for noncanonical sentences, such as passives, constitute important evidence in favour of models that allow for nonveridical representations. However, it is unclear whether these effects reflect errors in the mapping of form to meaning or difficulties specific to the procedure used to test comprehension. We report two experiments combining two different comprehension tasks to distinguish between these possibilities. In Experiment 1, participants first judged the plausibility of canonical and noncanonical sentences and then named the agent or patient of the sentence. In Experiment 2, the order of the two tasks was reversed. Both tasks require correct identification of the agent or patient/theme, but they differ in the complexity of the operations needed to complete them successfully. In both experiments, participants made a substantial number of errors in agent/patient naming, even when they had correctly assessed sentence plausibility. We conclude that misinterpretation effects do not indicate parsing errors and therefore cannot serve as evidence for non-algorithmic processing. Our results support models of the HPM that assume algorithmic processing only.