As members of a learned profession, engineers are often required to assess and critique the work of others. Preparation for this professional responsibility should be developed during their academic training, alongside other required skills. This author proposes that there are generic skills and training methodologies that can be applied to both technical and “soft-skill” situations to prepare students for this task. This paper discusses the results of a peer assessment exercise applied to a “soft-skills” situation.

The main objectives of this experiment were to i) develop peer assessment skills in students, ii) maintain or improve the accuracy of assessments for subjective material, iii) improve students’ skills in the subject area, and iv) potentially reduce marking effort for instructors.

The experiment described in this paper involved peer assessment of a short report (3–5 pages) required as a term assignment in a senior course on ethics and professionalism. The reports were prepared and submitted by groups of two students. Each student was then randomly assigned two other reports to assess in a double-blind fashion, except that no student reviewer received their own report. For reference and analysis, each report was also assessed by both the instructor and a Teaching Assistant, resulting in approximately six separate assessments per report. The results were used to determine a grade for the assignment. The original assignment rubric was used for all assessments. In addition, formative feedback was provided by the reviewers and returned to the authors.

The quality of the numerical results was analyzed by comparing the marks determined by the student assessors to the reference (instructor, TA) assessments. An average difference of 8.5% was observed, which was considered generally acceptable given the subjective nature of the material.
Student “generosity bias” was also considered, but was found to be virtually non-existent, with a difference in student versus reference averages of less than 0.2%. “Outliers” were anticipated, and the student assessments showed approximately twice the standard deviation of the reference marks. A weighted average was used to determine the assignment mark, and any marks outside a 20.0% band were de-weighted. Approximately 25% of cases were weight-adjusted, resulting in a maximum mark adjustment of 4.1% and an average adjustment of only 1.6%.

Feedback was solicited from students prior to the peer review period and at the end of term. Informal feedback, solicited prior to the review period regarding instructions and logistics, was used to refine the setup for the peer review phase. Questions on the value of both the exercise and the feedback provided were included in an end-of-term survey of students about the course, with 83% finding the exercise “a bit” or “quite” educational and 74% finding the peer feedback “a bit” or “quite” helpful.

Involving students in this peer evaluation exercise had generally positive outcomes and provided experience from which to improve future implementations of peer assessment toward the objectives of this experiment. Recommendations regarding future application include the importance of instructions and setup, student training and rehearsal, and mark-determination considerations.
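The mark-determination scheme described above (a weighted average of the roughly six assessments per report, with out-of-band marks de-weighted) can be sketched as follows. The paper does not specify how the 20.0% band is centred or what reduced weight de-weighted marks receive, so the choices below (band centred on the plain mean of all marks, half weight for out-of-band marks) are illustrative assumptions only.

```python
def weighted_assignment_mark(marks, band=20.0, deweight=0.5):
    """Combine several assessments of one report into a single mark.

    Assumptions not specified in the paper: the band is centred on the
    plain mean of all marks (so a mark is "outside" if it deviates by
    more than band/2), and out-of-band marks get weight `deweight`
    rather than being discarded.
    """
    mean = sum(marks) / len(marks)
    # Full weight inside the band, reduced weight outside it.
    weights = [1.0 if abs(m - mean) <= band / 2 else deweight for m in marks]
    return sum(w * m for w, m in zip(weights, marks)) / sum(weights)
```

With all marks in band the function reduces to the plain mean; a single discrepant mark is pulled toward the consensus rather than excluded outright, which matches the small average adjustment (1.6%) reported above.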