Registry of this type; over 37 724 individuals have been enrolled to date. One activity of this Registry is the semicentralized pathologic review of tumors from all probands. Given the semicentralized nature of the review, this study was undertaken to determine the reproducibility of histologic subtyping and grading of invasive breast cancer among the reviewing pathologists, the source(s) of classification discrepancies, and strategies to circumvent such discrepancies. A total of 13 pathologists reviewed 35 invasive breast cancers and classified them by primary and secondary histologic type, Nottingham grade and score. Lymph-vascular space invasion, circumscribed margins, syncytial growth and lymphocytic infiltrate were also evaluated. A training session using a separate set of slides was conducted prior to the study. General agreement, in terms of category-specific κ's and percent agreement, and accuracy of classification relative to a reference standard were determined. Classification of histologic subtype was most consistent (and accurate) for mucinous carcinoma (κ = 1.0), followed by tubular (κ = 0.8) and lobular subtypes (κ = 0.8). Classification of medullary subtype was moderate (κ = 0.4), but additional evaluation of degree of lymphocytic infiltrate, syncytial growth and circumscribed margins identified most cases. Category-specific κ's were moderate to good for Nottingham grade (κ = 0.5–0.7), with the greatest agreement obtained in categorizing grade I (κ = 0.7) and grade III (κ = 0.7) tumors. A flexible classification strategy that employs individual and combined criteria provides good interobserver agreement for invasive breast cancers with uniform, unambiguous histology and compensates for classification discrepancies in the more histologically ambiguous or heterogeneous cancers.

Keywords: interobserver reproducibility; invasive breast cancer; familial breast cancer; breast/ovarian cancer family registry

The reproducibility of the classification and grading of invasive breast cancer and the cause(s) of interobserver disagreement among pathologists have not been adequately evaluated. Prior studies evaluating interobserver concordance in categorizing breast lesions have documented improved diagnostic agreement when the pathologists involved used agreed-upon criteria,1 but other potential sources of poor interobserver agreement, such as difficulties in the application of individual histologic criteria, the individual pathologist's variation in use of these criteria, and most importantly, the ambiguous or borderline and heterogeneous nature of the