Purpose: I investigated best practices for teaching content auditing in two graduate classes tasked with auditing websites. I observed the students' strategies for developing auditing criteria; the students then used their audits to implement website redesigns. Two research
questions guided this study: 1) How do students create assessment criteria for website content audits? 2) What additional support could help students determine assessment or rubric criteria that are specific and, above all, easily measurable? I focused on how and why students made
auditing decisions. Method: I taught two graduate classes in which students worked with real clients and live websites. Content auditing was part of the process the students learned and applied. I used case study methods, interviews, and textual analysis to investigate student
auditing empirically. Nine participants, including students and clients, shared their perspectives. Results: Simple categories with binary criteria made auditing easier and assessment more straightforward. Students asked to see more examples of audits. Students in Class 1 misunderstood
the audience, which had a ripple effect on the resulting web design. Students from both classes were nonetheless able to make incremental improvements to both client websites. Conclusion: Graduate students who conduct content audits of websites need additional training in listening to clients. Discussing the impact of web content evaluation may help students learn to tailor auditing guidelines to their specific clients. Practitioners may need recursive auditing to fully define criteria.