Evaluation capacity building (ECB) is still an emerging area of study in the field of evaluation. The purpose of ECB is to help program practitioners implement higher‐quality evaluation; however, better tools and resources are needed to effectively assess ECB efforts. Existing measures typically rely on self‐report rather than assessing the artifacts produced through ECB training. Among the few non‐self‐report tools that support the assessment of ECB efforts are the Relational Systems Evaluation rubrics, which are designed to evaluate logic models, pathway models, and evaluation plans. These rubrics were first developed and tested several years ago. The purpose of the current study is to update the Relational Systems Evaluation rubrics to reflect current ECB knowledge. The updated rubrics demonstrate good to excellent inter‐rater reliability and high internal consistency. The results of this study contribute to the ECB field by providing measurement tools for assessing the quality of ECB artifacts. The rubrics can also be used by organizations and funders that need a systematic approach for assessing (and comparing) the quality of evaluation plans and visual theory of change models (e.g., logic models).