Assessing writing performance is prone to bias arising from the interaction between raters and criteria, because raters may score some criteria more consistently or more harshly than others. I therefore explored how seven raters assessed three essays in order to identify bias in their rating, how their background (experience of teaching writing and length of that experience) affected their scoring, and how they perceived the scoring rubric. The instruments were three essays, an analytical writing rubric, and questionnaires on the raters' background and perception. I applied two-way ANOVA, one-way ANOVA, and Hoyt's ANOVA to examine rater bias and the effects of background and perception on the scores awarded for writing performance. The scoring criteria of Content, Organization, and Vocabulary (0.195, 0.511, 0.545, respectively) were found to show bias. With respect to the raters' experience of teaching writing, the scoring criterion of Mechanics showed bias (0.026 ≤ 0.050), but the length of teaching writing experience did not affect the scoring of Content, Organization, Vocabulary, Language Use, or Mechanics; that is, no bias was found (0.705, 0.663, 0.171, 0.206, 0.090 ≥ 0.050). According to the perception questionnaire, the raters were familiar with the writing rubric prior to this research and agreed that it helped them discriminate among the different score levels. They also considered the rangefinders in the rubric useful tools for assigning scores, and they felt that the rubric measured essential elements of effective writing teaching and learning. They believed the rubric could be used as a professional development tool to support the teaching and learning of writing, and they were confident in their ability to score with it.
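As a minimal sketch of the kind of analysis summarized above, the snippet below shows how a two-way ANOVA with a rater-by-criterion interaction term could be run; this is an illustration only, not the study's actual analysis script, and the data, column names, and score values are hypothetical. A significant interaction term (p < 0.05) would indicate that some raters score particular criteria more harshly or leniently than others, i.e., rater bias tied to specific criteria.

```python
# Illustrative two-way ANOVA for rater x criterion interaction.
# Assumes pandas and statsmodels are installed; all data are made up.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per rater's score on one
# criterion for one essay (here only 3 raters, 2 criteria, 2 essays
# are shown; the real design had 7 raters, 5 criteria, 3 essays).
data = pd.DataFrame({
    "rater":     ["R1", "R1", "R1", "R1",
                  "R2", "R2", "R2", "R2",
                  "R3", "R3", "R3", "R3"],
    "essay":     ["E1", "E2", "E1", "E2"] * 3,
    "criterion": ["Content", "Content", "Mechanics", "Mechanics"] * 3,
    "score":     [22, 24, 4, 5,
                  18, 20, 3, 3,
                  25, 23, 5, 4],
})

# Fit a linear model with main effects and the rater x criterion
# interaction, then produce the ANOVA table (Type II sums of squares).
model = ols("score ~ C(rater) * C(criterion)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```

In such an output, the row for `C(rater):C(criterion)` carries the interaction test; comparing its p-value against 0.050 mirrors the decision rule reported in the abstract for flagging bias.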