Abstract: Massive Open Online Courses (MOOCs) are attracting huge numbers of students all around the world. These courses include different types of assignments in order to evaluate the students' knowledge. However, such assignments are usually designed to allow straightforward automatic evaluation, which makes it impossible to assess skills that require answering open-response questions. Peer assessment, in which students are asked to grade the assignments of other students, is an effective method to overcome the impossibility of having staff graders for this task. Additionally, students gain deeper knowledge of the subject by having to read those assignments critically. However, the grades given by student graders must be filtered to avoid the bias caused by their lack of experience in assessment tasks, and there are a number of approaches to do this. In this paper we present a factorization approach that, in addition to the grades given by graders, is able to incorporate a representation of the content of the students' responses using a Vector Space Model of the assignments. In this way we bridge the gap between peer assessment and content-based methods that use shallow linguistic processing. The paper reports the results obtained with this approach on a real-world dataset collected at three Spanish universities: A Coruña, Pablo de Olavide at Sevilla, and Oviedo at Gijón. The scores obtained by the method presented here were compared with those provided by the staff of these universities. We report a considerable improvement whenever the content-based approach is used. In any case, we conclude that there is no evidence that staff grading would have led to more accurate grading outcomes than the assessment produced by our models.
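As a rough illustration of the idea summarized above (not the authors' actual model), the following Python sketch factorizes a sparse matrix of peer grades while deriving each submission's latent factors from a TF-IDF Vector Space Model of its response text. The toy data, variable names, and hyperparameters are all hypothetical assumptions made for the example.

```python
# Minimal sketch, assuming a bias-plus-factors model where a grade given by
# grader u to submission i is approximated as mu + b_g[u] + b_s[i] + P[u].q_i,
# and q_i is obtained by projecting the submission's TF-IDF vector.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy data: (grader, submission, grade) triples and the submission texts.
peer_grades = [(0, 1, 8.0), (0, 2, 5.0), (1, 0, 7.0), (2, 0, 6.5), (2, 1, 9.0)]
texts = ["response text of submission 0",
         "response text of submission 1",
         "response text of submission 2"]

# Vector Space Model of the responses: each submission becomes a TF-IDF row.
X = TfidfVectorizer().fit_transform(texts).toarray()

n_graders, n_subs, k = 3, 3, 2
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_graders, k))   # grader latent factors
W = rng.normal(scale=0.1, size=(X.shape[1], k))  # maps TF-IDF to item factors
b_g = np.zeros(n_graders)                        # grader bias (leniency)
b_s = np.zeros(n_subs)                           # submission bias (quality)
mu = np.mean([g for _, _, g in peer_grades])     # global mean grade

lr, reg = 0.05, 0.02
for _ in range(200):                             # SGD over observed grades
    for u, i, g in peer_grades:
        q_i = X[i] @ W                           # content-based item factors
        err = g - (mu + b_g[u] + b_s[i] + P[u] @ q_i)
        b_g[u] += lr * (err - reg * b_g[u])
        b_s[i] += lr * (err - reg * b_s[i])
        P[u] += lr * (err * q_i - reg * P[u])
        W += lr * (err * np.outer(X[i], P[u]) - reg * W)

# A debiased quality estimate for each submission strips the grader terms.
print([round(mu + b_s[i], 2) for i in range(n_subs)])
```

Deriving the submission factors from the TF-IDF vectors is just one simple way to inject content information into a factorization of the peer-grade matrix; the concrete formulation used in the paper may differ.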