“…Although proponents argue that this method is an indispensable support tool for traditional evaluative measures (Cronin & Overfelt, 1994; Garfield, 1983a, 1983b; Glanzel, 1996; Koenig, 1982, 1983; Kostoff, 1996; Lawani & Bayer, 1983; Narin, 1976; Narin & Hamilton, 1996; van Raan, 1996, 1997), critics contend that it has serious problems or limitations that affect its validity, including the following: (1) citation counts give no clue as to why a work is being cited; (2) citations are field-dependent and may be influenced by time, number of publications, access to or knowledge of the existence of needed information, and the visibility and/or professional rank of the authors; and (3) citation databases credit only the first author, primarily cover English-language journal articles published in the United States, are not comprehensive in their coverage of the literature, and have many technical problems such as synonyms, homonyms, and clerical errors (MacRoberts & MacRoberts, 1986, 1989, 1996; Seglen, 1992, 1998). Studies reporting both the validity of citation counts in research assessment and their positive correlation with peer evaluations and publication counts have been discussed and reviewed by many, including Baird and Oppenheim (1994), Biggs and Bookstein (1988), Cronin and Overfelt (1996), Holmes and Oppenheim (2001), Kostoff (1996), Narin (1976), Narin and Hamilton (1996), Oppenheim (1995), Seng and Willett (1995), and Smith (1981).…”