Safety is high on the health agenda, and rightly so. Providing evidence-based, safe, quality care, first time, is effective and makes efficient use of resources. However, there is one aspect that puzzles me, and that is why we are not better at sharing good practice. This is true between care sectors and specialties as well as within them. Regularly reading Roy Lilley's thought-provoking blogs on NHS management and policy (https://ihm.org.uk/roy-lilley-nhsmanagers/) and the Academy of Fabulous Stuff (https://fabnhsstuff.net/), a website that aims to share best practice, simply makes me scratch my head.
How long does it take to roll out best practice, and why does it take so long? How is it that some changes occur much more quickly than others? Change is not fast, but it should not be glacial.
Human factors are defined as:
‘Enhancing clinical performance through an understanding of the effects of teamwork, tasks, equipment, workspace, culture and organisation on human behaviour and abilities and application of that knowledge in clinical settings.’
Human factors are flavour of the month, but why was this not always so? It is people who make a difference, and we are all (including those of us in healthcare) socialised into certain behaviours. Many will remember having to tailor decisions to the wishes (or, dare I say it, whims) of the consultant under whom the woman was booked. There was no evidence-based practice or clinical guidance then, but instead ‘Mr C likes X and Y but Dr T wants A and B’. We are not as far beyond this as we would like to think. Thankfully, such hierarchies are not quite as rigid as in the past, but we must not underestimate their effect.
It has been recognised that by changing this socialisation and by learning together, better team-working can be facilitated (Weaver et al, 2014). This should not be a one-off; it needs frequent re-emphasis because, however good initial preparation is, it will come up against the culture of the workplace. We need to acknowledge the level of influence that culture has over us, and how it can erase much of what has been demonstrated and learned at university.
Mandatory training is a key risk mitigator in maternity services, but I am puzzled by why there is so much variation in provision. Even allowing for particular local issues that need to be addressed, there are significant differences in the number of days allocated to training, as well as in how much of it is multidisciplinary. There are maternity units where most training is uni-professional, and others where 90% is shared. What is the correct balance? In some units it is a one-size-fits-all approach; in others, content is tailored to working areas, so that community midwives, for example, have specific input relevant to them. Should there be a template, a minimum number of days or an agreed schedule for specific updates? Is this something that is too important to be left to local decision-making and the availability of resources?
Guidelines from the National Institute for Health and Care Excellence (NICE) are interpreted locally, which can leave room for individual influences. I acknowledge that guidance is just that, and that using it does not mean suspending one's own clinical decision-making; however, it does mean that there should be no place for practices that are harmful, delay healing or cost considerably more than comparable alternatives.
So the question is, can we afford to change how we do things? Or can we afford not to?