Bug reports are crucial software artifacts for both software maintenance researchers and practitioners. Researchers typically use bug reports to evaluate automated software maintenance tools: a large repository of reports serves as input to a tool, and metrics are computed from the tool's output. This process differs markedly from how practitioners use bug reports: practitioners distinguish between reports written by experts, such as programmers, and reports written by non-experts, such as users, recognizing that a report's content depends on its author's expert knowledge. In this paper, we present an empirical study of the textual differences between bug reports written by experts and those written by non-experts. We find that a significant difference exists and that it has a significant impact on the results of a state-of-the-art feature location tool. In an additional study, we found no evidence that the observed differences were caused by increased use of source-code terms in the expert bug reports. We recommend that researchers evaluate maintenance tools on separate sets of expert and non-expert bug reports.