Our previous study showed that automatically generated attribute grammars were harder to comprehend than manually written attribute grammars, mostly because of unexpected solutions. This study is an internally differentiated replication of that experiment; unlike the original, it focuses on testing the influence of code bloat on comprehension correctness and efficiency. While the experiment’s context, design, and measurements were kept mostly the same as in the original experiment, more realistic code bloat examples were introduced. The replicated experiment was conducted with undergraduate students from two universities. It showed statistically significant differences in comprehension correctness and efficiency between attribute grammars without code bloat and attribute grammars with code bloat, even though the participants perceived attribute grammars with code bloat to be as simple as attribute grammars without code bloat. On the other hand, there was no statistically significant difference in comprehension correctness and efficiency between automatically generated attribute grammars with possible unexpected solutions and attribute grammars with code bloat, although there was a statistically significant difference in the participants’ perception of simplicity between these two groups: the participants perceived attribute grammars with code bloat as significantly simpler than automatically generated attribute grammars.