Peer assessment has been used in teaching for many years, and its benefits for student learning are well known (Ayres, 2015). The focus of this report is on the learning process in relation to students' understanding of the assessment process and criteria; specifically, whether peer assessment aids their understanding of assessment criteria and marking and so promotes learning in this area. A small-scale qualitative study was carried out to assess this. This article gives details of the study, including a discussion of the findings and their application to practice.
Background and rationale
Within the discipline of midwifery, equal significance must be given to practice-based and theoretical learning, as stipulated by the Nursing and Midwifery Council (NMC, 2009) and the Department of Health (DH, 2012). In the practice placement, a mentor who has received appropriate training undertakes assessment of the student. Marks awarded in practice are generally higher than those awarded in theory; the two combine to give an exit award at the end of the training. The written feedback given on any theoretical assessment may cause a great deal of anxiety and often requires verbal feedback from the lecturer to help the student understand the points made. Educationalists, in this case midwifery lecturers, have to complete a teaching course validated by the NMC, and an NMC mapping tool, to provide evidence that they have the knowledge, skills and experience to meet the criteria required for entry to the NMC register as a teacher (NMC, 2008).
Peer assessment has been evaluated generally in relation to student engagement (Casey et al, 2011; McGarr and Clifford, 2013) and as a valuable part of the student learning process (Topping, 2009; Esfandiari and Myford, 2013). However, the focus of this investigation is the effect that peer assessment has on the way students understand what is expected of them in the assessment itself, and how well they understand the marking system and the process lecturers follow to arrive at a mark. Bloxham (2015) discusses the importance of students being able to self-assess: they are able to do this if they understand what is expected and can assess their own work against it. The main interest here was therefore students' experience as peer assessors of each other's work, rather than in having their own work assessed.
Topping (2009) notes that involvement in peer assessment can aid development of transferable skills for life, as peer assessment is something that continues long after training is completed. As a midwife, one is embarking on a career path that involves lifelong learning as a professional requirement (NMC, 2015). It follows, therefore, that a training course for midwives must incorporate facilitation of the development of these skills. Midwives are required to use assessment skills in their day-to-day work and this is particularly relevant when assessing students in the practice area. They are assessing whether the student should pass or fail a competency as determined by the NMC (2009) guidelines and stipulations. If they are taught the skills required in assessing one another's work as students, this can translate into a valuable skill as a qualified midwife. Assessment skills are transferable and a valuable resource for future practice (Kearney and Perkins, 2014).
Literature review
A literature review was carried out. The following search terms were used:
These terms were used in varying combinations and a total of 31 articles were returned. These were reviewed for relevance and reduced to 12 articles for consideration. The summary of the findings of this review was that, while peer assessment is effective for student engagement in relation to learning and understanding the assessment process, preparation is essential (Kearney and Perkins, 2014). Elwood and Klenowski (2002) propose that only when assessment is fully understood can it be used to enhance learning. Peer assessment can be a tool to demystify how assessments are marked, and the skills of critical thinking and critiquing another's work will support and facilitate this transparency. Bloxham and West (2004) studied the effect of peer assessment on the development of students' conceptions of the assessment process. They found that peer assessment had a considerable beneficial effect on students' understanding of assessment criteria and on their ability to understand assessment feedback.
The literature search found only two articles that included a focus on the impact of peer assessment on the understanding students have of the assessment process; for the most part, the focus was on student engagement. This provided the rationale for focusing mainly on student understanding of the assessment process, an aspect that has had less focus and investigation in the literature examined.
A single cohort of students undertaking a 3-year BSc (Hons) midwifery course was involved in this study. The results of the first summative assignment of their second year were examined; this was also the first assignment they completed at academic level 5. Of the 17 students in the study, three achieved less than 40% (refer) and three achieved 40–44%. Even among the students who passed with a higher mark, there were a large number of queries about how the assignment was marked and why comments were made. The lecturer team offers a great deal of advice on preparation, and students are encouraged to make a plan for each assignment. All members of the teaching team then review this plan, with everyone included in the email correspondence, so that consistent, agreed advice and answers to queries are given. Many students had a one-to-one meeting with a lecturer to discuss the marking and comments, to aid understanding and acceptance of the marks. This involved reading through the comments made and discussing them to clarify their meaning. It is acknowledged that this practice may be particular to the university in which the students were enrolled.
It became apparent through discussion that one of the factors contributing to the students' reactions to the assignment results was that they did not fully understand what the marking grid meant, or had issues about the comments assigned throughout their work. The marking grid provides the criteria by which the assignment is marked and graded. Lecturers give constructive advice on how to improve the work, or add comments to highlight good points within the text. The amount of discussion generated by the results of the assignment initiated this investigation into peer assessment as a way to ensure greater understanding of what is expected from each assessment for the students.
Aims
The aim of this study was to advance understanding of the impact of including peer assessment in the learning process on student engagement and on understanding of the assessment process. Although student engagement is considered, it is examined only in relation to whether the use of peer assessment actually improves students' understanding of the assessment process and of what is expected of them in each assessment. For the purposes of this study the following definition of peer assessment was used:
‘…an arrangement for learners to consider and specify the level, value, or quality of a product or performance of other equal status learners.’ (Topping, 2009: 20)
Method
A small-scale qualitative study was carried out. The focus was on the effect of the experience of peer assessment on students' own perceptions of how assessments are marked and critiqued; this relates to how meaning is socially constructed from experience, and so qualitative methods were adopted. A questionnaire with some open-ended questions was chosen. Cohen et al (2011) describe open questions as allowing participants to explain and qualify their answers, giving richer data than would otherwise be possible with the small number of participants involved.
The students had a workshop on infectious diseases, where each student had to give a presentation in class on the same day. This was identified as the best opportunity to implement and evaluate peer assessment and the effect on understanding of the assessment process. Peer assessment is often used in this session and the outcomes of the presentations are formative and have no bearing on the final degree classification; therefore, ethical approval was not considered necessary. The students were given a verbal explanation of the purpose of the peer assessment. The feedback questionnaire was circulated afterwards and completed voluntarily and anonymously by the students.
There were a total of 17 students. Each student, in turn, had to present to the class and then answer any questions raised. An explanation of the peer assessment process and what was expected of them was emailed a week before the session, and then also discussed in class. This gave the opportunity for queries to be raised and answered.
The assessment tool chosen for the students to use for peer assessment (Appendix 1) was circulated on the day of the workshop. It was deliberately worded in a way that required careful reading and understanding of the criteria for each mark, rather than offering a simple scale. The tool previously used was a simpler, ordinal scale that required students to circle one word. The new tool was chosen because its lengthier descriptors more closely resembled the marking rubric used by midwifery lecturers. The students filled in one assessment sheet per student presentation, and a ‘buddy’ was allocated to each student presenting. The purpose of the buddy was to collate the feedback for that student; this feedback was then given to the student before the end of the day.
Students were then asked to complete feedback in the form of a questionnaire (Appendix 2) to provide information on how they had found the whole process, and whether it had improved their understanding of assessing others and what is involved. It allowed for further comment and information to be given, as well as answers on a five-point scale (strongly agree, agree, don't know, disagree, strongly disagree). This was sent to students 2 days after the workshop, and a follow-up email requesting completed feedback forms was sent 1 week later. The questionnaires were returned anonymously in a designated box at the university.
Results
Of the 17 students involved, only 10 returned their feedback forms. A deadline of 1 month from the date of the workshop was given for these to be returned. All 10 who completed the feedback said it had improved their understanding of the assessment process and that they found the assessment tool easy to use. Comments included:
‘Yes, I think it is a good idea—it makes us think about not only the content in presentations such as these but also all of the other aspects taken into consideration.’ (Student A)
‘I think that it has helped as it has made me understand how hard it can be to assess work.’ (Student B)
‘Yes. It gives you an idea of what the markers are looking for, broken down into detail. Just the little things about making sure you make eye contact, being heard, and frequently reflecting on the objectives of the presentation.’ (Student C)
One student found it particularly difficult, commenting:
‘I definitely wouldn't want to be the one assessing assignments!’ (Student D)
Most responses indicated that students felt more able to be objective as the day progressed, though there was a feeling of not wanting to be unfair to others. In answer to question 3: ‘Were there any factors which affected the objectivity of your feedback?’ the answers were varied.
‘I think there were factors, one being not wanting to dishearten anyone for all the hard work they had put into the presentation so wasn't as harsh because we are all friends!’ (Student E)
‘No, I tried to be as objective as possible for the task.’ (Student F)
‘I found it hard to give negative feedback or criticism to my friends as I did not want to offend them.’ (Student B)
One comment appeared to show the student had fully engaged in the task and reflected on it:
‘I think having to do this has given me a better understanding of how it feels for the tutors to grade our work. Nobody wants to make someone feel bad about his or her work but also you are doing people no favours if you do not point out how to improve. That is a hard job and one that I appreciated more after this day.’ (Student G)
Results for a summative assignment completed before using peer assessment were: three refer results (< 40%) and three results of 40–44%, which equated to 35.3% of the cohort achieving ≤ 44%. Results for a summative assignment completed a short time after the peer assessment were: one refer and one pass mark of 42%, which equates to 11.8% of the cohort achieving ≤ 44%.
While these results come from a small study at one university, with only a small number of participants, it is interesting that there was such an improvement in outcomes from the summative assignment completed before the intervention to that completed afterwards. This would indicate that peer assessment may have been a contributory factor; further use of peer assessment, alongside audit of assessment results, would be useful to ascertain whether this is a recurrent theme.
Discussion
This is a small-scale study specific to one university, and therefore cannot be generalised to a wider population. However, the results support the findings of Bloxham and West (2004) and Kearney and Perkins (2014) that peer assessment does enhance a student's understanding of the assessment process. This gives students a greater insight into how work is assessed and what qualities and criteria are required. In turn, this insight can be beneficial if used when assessing their own work before submission. The comments from the students in this study support this, sometimes showing a level of surprise at how difficult it can be to assess another's work.
Rust et al (2003) noted that for students to have the tacit knowledge involved in assessment marking, there had to be some socialisation processes in place. This is supported by Bloxham and West (2004), who found that just supplying written marking criteria was not enough to make this transparent to students. Kearney and Perkins (2014) note that the system they used involved preparation of the students for peer assessment. This served to enhance the process of peer assessment as the students were prepared for the role of assessor. It was noted that it was burdensome to implement, partly due to the time required for preparation of students. However, it may be that once this initial time has been used to prepare students, it would not need to be repeated throughout the rest of the course. Peer assessment could then be used as required.
A limitation of this study is that the students were not given a great deal of time or instruction for the role of assessor. They were given verbal information and instructions 1 week before the task, but only received the peer assessment tool on the morning of the workshop. They had previously experienced some peer assessment when grading group presentations; however, this was minimal, and most of the students expressed that they felt underprepared and unable to take on the task competently. The time given for preparation for the role of peer assessor can affect how well it is received and understood and, as a consequence, the benefits gained (Topping, 2009). Bloxham and West (2004) also moderated the peer marking, so students not only had the experience of marking assessments but also received feedback on how they had fared in this role; this exposed students to the complete marking process, including moderation, and may have encouraged them to pay closer attention to the assessment criteria and marking scheme. It would, therefore, be good practice to include moderation when using peer assessment in future, if it is to be used as effectively as possible. It was not possible to undertake moderation in this study, owing to the small numbers and only one lecturer being present.
Some students commented in their feedback that they had felt unprepared for the role of assessor but found it easier as the day went on. This implies that experience in the role and familiarity with assessment criteria made a difference to the level of competence felt by the student. Very little preparation for the role of peer assessor was carried out prior to the day of the workshop, which may have had an effect on how prepared the students felt.
On collating the feedback, it became apparent that it may have been beneficial to include a question on whether students felt their understanding of the assessment criteria, as distinct from the assessment process, had improved. The assessment process encompasses the experience of having to critically examine the work of a peer and make a judgement on the standards met, but asking about the assessment criteria would have added another dimension. This may have elicited responses as to whether students felt better prepared to critique their own work when preparing an assessment for submission.
There was a positive improvement between the results of the summative assignment completed before this intervention and those of the assignment completed a short time afterwards. While it is not possible to establish a causal relationship between the intervention and these results, the improvement is noteworthy. Many variables may have affected the students' performance, including learning from the feedback on the first assignment, lecturer intervention (extra preparation for the next assignment) and different topics. However, it provides justification for further investigation into whether there is a causal relationship between peer assessment, understanding of assessment criteria and processes, and student grades. Casey et al (2011) noted that reviewing a peer's work can facilitate reflection on one's own work and ways to improve it. They also noted that some students may be more skilled in assessing, and more conscientious, than others. It is pertinent to note that this can equally apply to lecturers, and supports the need for moderation to ensure a fair process is followed.
Conclusion
Peer assessment has been used and examined in relation to student engagement. There is also a place for investigation into the effect it has on students' perception and understanding of the assessment process. This small-scale qualitative study has given an indication that peer assessment may also have a beneficial effect on students' understanding of the assessment process and criteria used. Larger studies are required to determine whether this tool can be used to improve student assessment outcomes, particularly in relation to their understanding of assessment criteria. This involves students being more aware of what it is like to critically review a piece of work, thus potentially enhancing their ability to critically review their own work before submitting. It has been shown that preparation for the task is beneficial, but can be time-consuming. It is necessary, therefore, to have a team of motivated lecturers who are willing to put in the time and effort to prepare students adequately for peer assessment.