I and other teaching faculty take pride in our ability to write creative and challenging examination questions. Our self-assessment is based on experience and our knowledge of our subject and discipline. Although our judgment may be correct, it is usually made in the absence of deep knowledge of what is known about constructing high-quality questions and tests that assess student understanding. A recent study suggests that many of us may be deceiving ourselves.

Momsen et al. [1] analyzed nearly 10,000 quiz and examination questions submitted by 50 faculty instructors of various undergraduate biology subjects, including molecular biology. They classified each test item according to its cognitive level in the six categories of Bloom's taxonomy [2]. A question at the lowest level, knowledge, received a rating of 1, and the successively higher levels of comprehension, application, analysis, synthesis, and evaluation received ratings of 2-6, respectively. Because examination questions are not of equal weight in grading, the authors took a weighted average to assign a Bloom level to the apparent cognitive expectations for students in a course. In addition, the authors evaluated the syllabi associated with the tests and determined the Bloom level that faculty conveyed to their students for the corresponding courses.

The results are sobering. Of the 9,713 items classified, 93% were at Bloom's level 1 or 2, 6.7% at level 3, and <1% at level 4 or above, the levels that require higher-order thinking skills. The mean for the entire set was 1.45; that is, the questions mostly expected recognition and recall of facts and definitions. When broken down by institution type, universities with doctoral programs scored the poorest with a rating of 1.38, whereas four-year colleges rated highest at 1.95, yet even this is a low-level expectation by Bloom criteria.
One might attribute this to the larger class sizes and the heavy reliance on multiple-choice tests at universities; however, the dataset showed no relationship to class size. The data show consistently that faculty expectations, as articulated in course syllabi, were higher than the level of performance they expected on examinations.
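The course-level score described above is a point-weighted average of per-item Bloom ratings. A minimal sketch of that calculation, with the item point values and ratings invented for illustration (the paper does not publish its raw data):

```python
def weighted_bloom_level(items):
    """Average each item's Bloom rating (1-6), weighted by its point value.

    `items` is a list of (points, bloom_rating) pairs, one per exam question.
    """
    total_points = sum(points for points, _ in items)
    return sum(points * rating for points, rating in items) / total_points

# Hypothetical exam: (points, Bloom rating) for each question.
exam = [
    (10, 1),  # recall a definition          (knowledge)
    (10, 2),  # explain a mechanism          (comprehension)
    (5, 3),   # apply a concept to new data  (application)
]
print(round(weighted_bloom_level(exam), 2))  # prints 1.8
```

Note that heavily weighted recall questions pull the course score toward 1 even when a few higher-order items are present, which is consistent with the low means reported in the study.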