It is well documented (Heaslip and Scammell, 2012; Bennett and McGowan, 2014) that grading practice is not an easy task and can be open to subjectivity, ambiguity, confusion and grade inflation (Donaldson and Gray, 2012). Midwives have a responsibility to support and educate student midwives in practice (Nursing and Midwifery Council (NMC), 2018a). This may include making a graded assessment of practice (NMC, 2009), but all midwives will need to contribute measurable evidence that focuses on the student's performance during their period of ‘practice supervision’ (NMC, 2018b). This article explores some of the specific outcomes of a three-phase project that led to the development of a practice assessment toolkit. This toolkit may be used as a guide when developing practice assessment documents or to assist those writing evidence of student progress and assessment (Fisher et al, 2019a). A key emphasis of the toolkit is that ‘student assessments are evidence based, robust and objective’ (NMC, 2018b:9).
Background
The Lead Midwife for Education United Kingdom Executive is a UK-wide group of senior midwife educationalists who represent the higher education institutions that deliver midwifery programmes leading to NMC registration. The group was made aware in spring 2013 of the growing issues attributed to grading practice and the challenges that midwives often faced when making a graded assessment of a student's performance. Lead midwives for education (LMEs) were ideally placed and willing to address the issues at a strategic level to make a difference for practitioners, students and academics alike. Ensuring that students were assessed in a robust and consistent way was seen to be crucial in providing safe and effective care. A working group of interested LMEs was established and embarked on a three-phase project (Figure 1): firstly, to undertake a scoping exercise of processes and views on approaches to grading midwifery practice (Fisher et al, 2017a); secondly, to identify a set of core principles for grading of midwifery practice (Fisher et al, 2017b); and finally, to develop a UK-wide, generic framework for grading midwifery practice (Fisher et al, 2019b). This action was felt to be timely, as the NMC was beginning to review the pre-registration midwifery education standards (NMC, 2009) and the outcomes of the project could therefore provide an evidence base for best practice in assessing the knowledge, skills and behaviour of students in the clinical environment.
Midwives practising in the UK will be aware of the newly published NMC (2018b) standards, which set out what the NMC expects for the learning, support and supervision of students in the practice environment, as well as how students are assessed for theory and practice. These standards replace the role of the mentor and sign-off mentor (NMC, 2008) with a practice supervisor, practice assessor and academic assessor (NMC, 2018b).
The new standards resulted from a major review by the NMC of its education standards to ensure they were future-proofed and fit for purpose (NMC, 2018b; 2018c). A practice supervisor supports and supervises midwifery students in the practice learning environment, and need not be a midwife; for example, the practice supervisor may be a paediatric nurse if the student midwife has a placement in the neonatal intensive care unit. The practice assessor, however, is a clinical midwife who makes and records objective, evidence-based assessments of conduct, proficiency and achievement, so it is important that the practice supervisor documents clear and comprehensible evidence detailing the student's progress, on which the practice assessor can base this judgement. An academic assessor is a midwife academic who likewise makes and records objective, evidence-based assessments of conduct, proficiency and achievement, but who also recommends progression (NMC, 2018b). The term ‘sign-off mentor’ is used in this article to reflect the period during which the study was undertaken, but the findings can equally be applied to these new roles and principles.
Phase 1: how was practice assessed?
The first phase comprised a descriptive, evaluative survey, which aimed to determine the variety of ways in which practice was being assessed, the tools that were being used and the views of practitioners using those tools (Fisher et al, 2017a). A response rate of 73% was achieved, with 40 of the 55 higher education institutions represented by the participating LMEs responding. The results confirmed a significant lack of parity when grading practice. Table 1 identifies some of the similarities and differences under six emerging themes.
Table 1. Similarities and differences in approaches to grading practice

| Themes | Similarities | Differences |
|---|---|---|
| People | Mentors, sign-off mentors, lecturers | |
| Process | Every university had a process, but there was limited similarity | |
| Point in the course | Graded in final week of placement | |
| Package (tool) | Two regional assessment documents | |
| Pass mark | If one element of practice did not pass, the whole assessment failed | |
| Portfolio | Not every university used a portfolio as part of the assessment of practice, so limited similarity | Universities used a variety of portfolios, reflective accounts, objective structured clinical examinations (OSCE), viva voce and other assessments, rather than solely clinical practice, to grade students |

NMC: Nursing and Midwifery Council
According to LMEs, clinicians were positive, identifying that their contribution to grading practice made them feel valued and that they had a responsibility as ‘gatekeepers’ to the profession. LMEs reported that many sign-off mentors felt that awarding a grade gave them a legitimate way to highlight students' strengths and weaknesses. Some reported that sign-off mentors were more discerning with practice grades, reserving the higher grades for the outstanding student, while others noted that a grading process meant that sign-off mentors were better able to identify struggling students.
Challenges were also highlighted, such as the length of time it took to consider and write comments congruent with the grade, which sometimes led to a lack of consistency between the grade and the comments. Participants also commented that some sign-off mentors did not appreciate that the terminology of the level descriptors reflected the stage of the programme, and were therefore hesitant to award a higher grade to students early in their training. That said, when asked if there had been any noticeable difference in students' grade profiles since grading practice had been introduced, half of the respondents (n=20) suggested there had been some degree of grade inflation. This finding concurs with evidence identifying that the majority of grades tend to cluster at the top of the grade scale (Edwards, 2012; Chenery-Morris, 2017). LMEs whose higher education institutions had not seen a recent difference in practice grades had often been grading practice since before 2009.
At the conclusion of this phase of the project, it was clear that there were inconsistencies in the interpretation and application of the NMC (2009) standards. The project team acknowledged that complete alignment of documents was not expected, due to innovation and inevitable differences in how higher education institutions developed curricula. However, there was a view that some of the inconsistencies could be addressed in order to promote greater parity in how the NMC standards were applied. This would also be an opportunity to develop a set of principles to improve clarity, fairness and robustness for the student and sign-off mentor when practice was being assessed. These considerations fed into phase two of the project.
Phase 2: core principles for grading practice
This phase of the study aimed to identify and agree a set of core principles for grading practice, aiding quality assurance and seeking to address concerns raised about subjectivity and grade inflation. The latter issue continues to be of national interest across all university programmes as the Government seeks to address concerns over the growing number of first-class degrees (Weale, 2018). The project group also wanted to improve assessment reliability by reducing the identified variations (Table 1). This phase of the study used participatory action research methodology (Freire, 1970; Denscombe, 2010). Data were collected via an online survey questionnaire followed by a group discussion with LMEs using a mini-Delphi approach (Green et al, 2007) to achieve consensus on terminology. Details of the design, data collection and results are reported by Fisher et al (2017b). Eleven core principles for grading midwifery practice were agreed (Table 2). The study findings recognised the importance of sign-off mentors being involved in developing the practice assessment tools (Principle 2), and that clear guidance on the assessment tool and the grading criteria should be a requirement (Principle 3). These two core principles have since been identified in the new NMC standards, where all curricula need to be developed in partnership with relevant stakeholders (NMC, 2018c) and objective, evidence-based assessments must provide constructive feedback to encourage professional development (NMC, 2018b:10).
Table 2. Core principles for grading midwifery practice

1. The NMC requires clinical practice* to be assessed by clinicians with due regard
2. Clinicians should be involved in developing and monitoring practice assessment tools/processes
3. Sign-off mentors should be given clear verbal and written guidance on the assessment tool and criteria for grading the level of performance/competence
4. The full range of grades available should be encouraged
5. The correlation between qualitative comments and grade awarded should be clearly demonstrated
6. A common set of grading criteria comprising qualitative comments that would attract different types of scoring (eg percentage, mark, A–F), depending on institutional requirements and programme preferences, will be developed to enhance standardisation of the measure of competence/performance in midwifery practice
7. Assessment tools should explicitly state that performance is being objectively measured against marking criteria that include knowledge, skills and personal attributes in the context of professional behaviour, rather than a subjective judgement on the student
8. Academic staff should provide opportunities to support sign-off mentors in their decision-making about a student's competence/level of achievement
9. Specific grades or symbols (rather than ‘pass’ or ‘refer’) should be awarded for clinical practice*, reflecting a continuum of development and meeting requirements of the NMC standards
10. If a practice-based module includes elements other than clinical practice*, it is recommended that the credit weighting for these additional elements should not exceed 50% in that module
11. Quality assurance of grading of practice (ie monitoring of inter-rater reliability) should be undertaken collaboratively by academic staff and clinicians experienced in assessment

NMC: Nursing and Midwifery Council
Phase 3: a generic framework for grading practice
The final phase of the project brought together the findings of the previous two phases to develop a generic framework for grading midwifery practice. Two proposed assessment tools devised by the project team were used: a lexicon framework and a rubric. The lexicon framework (Table 3) includes keywords relevant to undergraduate and postgraduate academic levels that may be used to indicate levels of performance in practice. The rubric (Table 4) comprises statements representing levels of performance in practice for undergraduate and postgraduate academic levels, mapped from the lexicon framework. One of each, at academic Level 5, is provided in Tables 3 and 4, with examples of their application (Boxes 1 and 2).
Table 3. Lexicon framework (academic Level 5)

| Category | Keywords |
|---|---|
| Knowledge | knowledge, evident(ce), understand(ing), inform(ed/ation), theory(etical), awareness, opinion, insight(ful), research |
| Skills | practice, able/ability, skill, care, act(ion/ive/ively), task, preparation, initiative, decision, competent(ce/ly) |
| Attitudes | behaviour, manner, compassion(ate), approach(able), philosophy, choice, perception, empathy(etic) |
| Other | woman, student, family, partner, colleague, NMC, time(s/ly), supervise(ion), standard, require(ment), midwife(ry), workload, support, resources, situation, team, guidance, prompt, guideline, complication, range |
| Adjectives | professional, direct, clinical, verbal, individual, own, written |
| Verbs | show, document(ation), demonstrate(ion), develop(ment), respond, learn(er/ing), reflect(ive/ion), perform(ance), communicate(ion), lack, need(s), apply(ication), manage(ment), provide, record, work, underpin, seek, make, identify |
| Adverbs | occasional(ly), consistently, always |

Indicative words by level of performance:

| Category | Fail | Pass | Good | Very good | Excellent | Outstanding |
|---|---|---|---|---|---|---|
| Adjectives | Unable | Safe(ly/ty) | Appropriate(ly) | Professional(s) | Wide | Very |
| Verbs | Lacks | Begin(ning) | Participate | Plans | Anticipate | Modify(ication) |

Key: in the original toolkit, keywords are colour-coded to indicate high or medium frequency of use
Table 4. Rubric (academic Level 5)

| | Fail | Pass | Good | Very good | Excellent | Outstanding |
|---|---|---|---|---|---|---|
| Knowledge | Unable to demonstrate sufficient knowledge and understanding to underpin safe practice | Knowledge is limited, but adequate to inform safe practice | Evidence of sound knowledge and understanding to underpin safe practice | Evidence of very good theoretical knowledge which is applied to practice | Demonstrates excellent theoretical knowledge which consistently underpins practice | Outstanding evidence-based knowledge is consistently applied to practice |
| Skills | Limited ability to perform common clinical midwifery skills and/or unsafe practice is demonstrated | Occasionally demonstrates limitations in some clinical skills, but ability is overall satisfactory | Demonstrates good ability in performance of normal clinical midwifery skills | Evidence of ability to perform effective clinical skills in a range of situations | Skilled in normal clinical practice and is developing the ability to identify complications under supervision | Consistently outstanding performance of normal clinical skills, responding appropriately to risk |
| | Demonstrates inadequate skills in woman-centred, compassionate care and/or inappropriate communication | Student acts and communicates effectively in providing compassionate care to the woman/family | Able to provide effective care, seeking to meet the woman's needs through informed choice | Student demonstrates very good communication skills to underpin professional care and teamwork | Demonstrates evidence of excellent professional communication skills and anticipation of needs | Consistently cares for women at a high standard, demonstrating outstanding communication and team-working skills |
| Attitudes | Evidence of lack of insight in the student's understanding of professional behaviour | Student demonstrates appropriate professional attitudes | Student clearly demonstrates a professional approach and compassionate manner | Student's behaviour and approach show evidence of appropriate adaptability | Student demonstrates sensitivity to individual situations, showing a high level of insight | Student consistently demonstrates sensitivity and empathy in complex situations |
| | Student demonstrates a poor attitude towards guidance and feedback | Student responds appropriately to guidance and feedback | Student uses initiative to self-assess and seek appropriate support | Student is competent in reflective practice | Student critically evaluates their own learning and development | Student consistently analyses own performance and rationalises modifications |
| Under minimal supervision | Does not achieve all the NMC standards/requirements | Achieves all the NMC standards/requirements | Achieves all the NMC standards/requirements well | Very good achievement of all the NMC standards/requirements | Excellent achievement of all the NMC standards/requirements | Outstanding achievement of all the NMC standards/requirements |
Box 2 shows how these phrases correspond with an assessment of practice in the community (Jade) and on the antenatal ward (Lizi), using colours to match comments with the rubric.
Example 1: Johan demonstrates limited knowledge; however, when asked, he can explain the rationale for the care he is giving using evidence from NICE. He is unable to prioritise his workload and needs direct supervision at all times. He is professional in his interactions with women and their families, but inconsistent in recording his findings.

Example 2: Estefania can plan and prioritise her workload; when activity is high, she is proactive in anticipating women's requests for discharge, demonstrating awareness of the complex nature of maternity care. Her documentation is always completed to a high standard.

For a second-year student at Level 5, Johan would be referred (‘fail’) in practice, whereas Estefania would be awarded ‘excellent’.
Reports on the findings from this final phase (Fisher et al, 2019a; 2019b) have shown that the majority of feedback received from clinicians was positive. It was identified that the lexicon framework could be used as the primary tool for grading practice, particularly when it came to writing evidence, with some suggesting it would enable more transparent and fairer grading. Students also responded positively, remarking that they could use the tools to self-assess their practice. Areas for improvement included simplification of language and provision of examples to aid clarification. Feedback on the rubrics suggested they could aid consistency of grading, even if the assessor had not worked predominantly with the student (as will be the case under the new NMC standards), and that there was scope for transferability across professional programmes. Findings strongly supported the introduction of a national assessment tool in both midwifery and nursing, and many regions are working to develop these.
It was clear from the final phase of the study that learning was seen as important, that both students and sign-off mentors needed to understand and recognise achievement in practice, and that grading was only a small part of this. Therefore, providing feedback to students on their strengths and areas to develop in a comprehensive and easily accessible format should be the main focus, rather than the grade.
Conclusion
The initial aim of the project was to understand the similarities and differences in approaches to grading practice among higher education institutions across the UK, and to identify whether a generic approach could aid consistency of assessment. The three-phase project provided the evidence needed to develop a practice assessment toolkit to ensure that student assessments are evidence-based, robust and objective. The development of the toolkit is timely, given the NMC's publication of the standards for student supervision and assessment (NMC, 2018b), and it has particular relevance to practice supervisors when writing evidence to reflect a student's performance that can be used by the practice assessor.
The practice assessment toolkit can be found on the project website (Fisher et al, 2019a). This includes an explanation of how it can be used, levels of performance that may be relevant to a range of higher education institutions, word clouds to provide a visual representation of terms, and the modified lexicon frameworks and rubrics.