Rater bias is a substantial source of error in performance assessment,
in which raters score examinees against a rubric designed on a specified
scale. Unfortunately, raters interpret the assessment criteria
differently despite training in their use, which undermines the
validity and reliability of the scores. In this study, both students
and teachers served as raters. If such
performance assessment can be shown to be valid and reliable,
involving students in the rating process could help lessen the burden
on teachers. This study examined whether these rater groups differed
significantly in severity or leniency in relation to the difficulty
levels of the descriptors in the rating scale and the ability of the
students. The Facets
analysis showed statistically significant bias interactions between the
rater types.
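For reference, Facets estimates such interactions under the many-facet
Rasch model. A minimal sketch of its rating-scale form, assuming the
facets analyzed here are examinee ability, descriptor difficulty, and
rater severity, is

\[ \log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k, \]

where \(P_{nijk}\) is the probability that rater \(j\) awards examinee
\(n\) category \(k\) rather than \(k-1\) on descriptor \(i\), \(B_n\) is
the examinee's ability, \(D_i\) the descriptor's difficulty, \(C_j\) the
rater's severity, and \(F_k\) the difficulty of the step from category
\(k-1\) to \(k\). A bias interaction then appears as a systematic
departure of one facet's estimate, such as a rater type's severity,
across levels of another facet.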