Large-scale paper-based examinations (PBEs) in computing education frequently emphasize rote memorization, misaligning instructional objectives with assessment techniques. Such incongruities hinder students' preparation for real-world challenges in both industry and academia by inadequately evaluating higher-order cognitive abilities. Educators are often deterred from implementing comprehensive skills assessments by the perceived complexity and resource-intensive grading processes involved.

To mitigate these limitations, this paper introduces an exam mode as an integral feature of the open-source learning platform Artemis. Designed for both local and cloud-based deployment, the exam mode incorporates anti-cheating protocols, automates the grading of diverse exercise types, and features double-blind manual grading to ensure assessment integrity. It fosters the evaluation of complex cognitive skills while substantially reducing the administrative load on faculty.

This paper substantiates the effectiveness of the Artemis exam mode through widespread institutional adoption, demonstrated by more than 50 successful computer-based examinations (CBEs). An in-depth case study involving 1,700 undergraduate software engineering students offers key insights, best practices, and lessons learned. This research not only pioneers the documentation of a secure, scalable, and reliable exam system at an institutional scale but also marks a seminal contribution to modernizing assessment strategies in computing education, with a particular focus on constructive alignment.