The discussion points for this week are as follows:

1. This paper is a retrospective cohort study – what is the place of observational studies in influencing or changing clinical practice?

2. Endpoints measured – were they robust enough to show that beta blockers are safe in this population of patients with COPD?

3. The paper used a database of patients in one geographical area. Should we be trying to build up the links needed to produce this kind of data across the UK more generally?

4. Is there a need for prospective research into whether beta blockers are safe in patients with COPD?

Beta blockers are widely prescribed for a range of conditions and are now a mainstay of the management of cardiovascular disease. Patients with chronic obstructive pulmonary disease (COPD) often have concurrent co-morbidities, including cardiovascular disease. However, there have been concerns about prescribing β-blockers in these patients because of the effect they may have on respiratory function:

  • evidence that the use of β-blockers in patients with COPD may reduce lung function (by lowering FEV1, the forced expiratory volume in one second)
  • β-blockers may increase airway hyperresponsiveness

One of the mainstays of COPD treatment is the use of beta-agonists, and there have been concerns that β-blockers may inhibit the bronchodilator response to these drugs.

As a result there has been some reluctance to prescribe β-blockers in these patients. This paper, published in the BMJ, looked at the use of β-blockers in patients with COPD to assess the effect on mortality, hospital admissions and exacerbations when they were used in combination with established therapy for COPD.

This was a retrospective cohort study, which identified cases from a disease-specific database in Tayside used by GPs and secondary care respiratory physicians. All patients fulfilled the GOLD criteria for a diagnosis of COPD, and data on these patients were collected by respiratory nurses at yearly visits. The authors then identified patients who had an admission to hospital due to COPD, and also gathered data on the prescription of respiratory and cardiovascular drugs and on deaths from the general register.

The main outcome measures were hazard ratios for all-cause mortality, emergency oral corticosteroid use (used to treat exacerbations of COPD) and respiratory-related hospital admissions. In these patients 88% of the β-blockers used were cardioselective.

The results – this paper showed a 22% reduction in all-cause mortality in patients prescribed β-blockers. There was a reduction in the adjusted hazard ratio for patients prescribed β-blockers alongside standard treatment for COPD compared with those who were not (0.28 vs 0.43). The paper also showed a reduction in oral corticosteroid use and hospital admissions. No adverse effect on lung function was detected at any stage of the stepwise treatment approach to COPD.
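As a quick gloss for readers less used to hazard ratios (this is my own arithmetic rather than a figure quoted in the paper): a 22% relative reduction in all-cause mortality corresponds to an overall hazard ratio of roughly 1 - 0.22 = 0.78, and a smaller hazard ratio indicates a lower rate of the outcome over follow-up, which is why the fall from 0.43 to 0.28 quoted above represents a benefit.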

The authors of this paper concluded that:

 β blockers may reduce mortality and COPD exacerbations when added to established inhaled stepwise therapy for COPD, independently of overt cardiovascular disease and cardiac drugs, and without adverse effects on pulmonary function

A list of discussion points will be posted shortly. Thank you to @amcunningham for suggesting this paper.

Apologies for the delay in posting the summary of last Sunday’s discussion. A week of nights on call in A&E didn’t leave much time for anything but sleeping. The summary will be posted as soon as possible, and I will tweet a link once it is up. Thank you all for continuing to join in the journal club discussions and I am looking forward to tomorrow night’s discussion already.

The NEJM paper published in 2009 has had an impact worldwide, with the introduction of surgical checklists in over 3000 hospitals. This paper highlighted an important patient safety issue and aimed to tackle it with a relatively simple intervention. The discussion points below are meant to be a broad starting point for the evening; I hope that the methodology of the paper in particular will be discussed in detail.

1. This study ran for less than a year in eight healthcare settings, and many criticisms have been made of the methodology of the paper (see this blogpost & this letters page for examples). Is this paper alone adequate to support widespread implementation of the checklist?

2. In the discussion of the paper the authors mention the Hawthorne effect as a possible mechanism of improvement, i.e. an improvement in performance due to the subjects’ knowledge of being observed. However, this has also been raised as a flaw in the study: the fact that participants knew they were in a trial could have led to the improvements shown, rather than the checklist itself. Does this reduce the validity of the study and its findings?

3. The checklist is a relatively simple intervention; is there a risk that it could become a tick-box exercise rather than being given due care and attention?

4. In a letter responding to the paper, members of NCEPOD stated that they supported the initiative but were concerned that the implied decrease in the perioperative rate of death was unlikely to be as great in the UK as reported in the paper. Does this make the study any less relevant to practice in developed countries?

If there is time I would also like to discuss how the paper is relevant to practice in less developed countries. Thank you to @fidouglas, @amcunningham & @assidens for their help.

A Surgical Safety Checklist to Reduce Morbidity and Mortality in a Global Population – Haynes et al for the WHO Safe Surgery Saves Lives Study Group

The paper chosen for this week’s journal club has had an impact on patient safety worldwide. As an F1 during my colorectal surgery job (on the rare occasions I went to theatre) I saw how this paper has changed practice, with the implementation of the WHO safe surgery checklist.

In the surgical setting it has been estimated that almost half of all complications are avoidable. This is a huge patient safety issue. In 2008, the WHO published guidelines to ensure the safety of surgical patients. From this, the authors of the NEJM paper designed a 19-item checklist with the aim of reducing surgical complications and their subsequent morbidity and mortality.

The surgical safety checklist is a simple intervention, a checklist that is followed at three key points with the whole surgical team present – before the induction of anaesthesia, before skin incision and before the patient leaves the operating theatre. The primary endpoint of the study was the occurrence of any major complication, including death, during a period of postoperative hospitalisation, up to 30 days (complications were defined as outlined in the American College of Surgeons’ National Surgical Quality Improvement Program).

The trial was run at eight sites in a range of healthcare settings worldwide. Before the checklist was implemented at the trial sites, baseline data, including complication rates, were reported for 3,733 patients at all trial sites. The checklist was then implemented, and patients over the age of 16 years undergoing non-cardiac surgery were enrolled consecutively. During the pre-checklist period the rate of any complication at all sites was 11%. After the implementation of the checklist this fell to 7% (P<0.001). The total in-hospital rate of death fell from 1.5% to 0.8% (P=0.003). The authors of the paper concluded that:

 Applied on a global basis, this checklist program has the potential to prevent large numbers of deaths and disabling complications, although further study is needed to determine the precise mechanism and durability of the effect in specific settings.
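To put those headline figures in more concrete terms (this is my own back-of-the-envelope arithmetic, not a calculation taken from the paper):

Absolute risk reduction for any major complication = 11% - 7% = 4%, giving a number needed to treat of roughly 1/0.04 = 25 operations per complication avoided.
Absolute risk reduction for in-hospital death = 1.5% - 0.8% = 0.7%, a number needed to treat of roughly 1/0.007 ≈ 143.

These are crude pooled figures across eight very different healthcare settings, which is exactly why questions about how far the effect generalises (for example to the UK) matter.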

According to the WHO, over 3000 hospitals worldwide have now implemented the surgical safety checklist, an impressive figure that shows how research can translate into a worldwide change in practice. Tonight at 8.00pm BST we will be discussing this paper – a list of discussion points will be posted shortly. I look forward to another interesting and lively debate.

When I was introducing this paper I chose to highlight that one of the reviewers thought it was a poor quality study. I don’t know if that influenced the discussion, or even non-participation in last week’s #twitjc, but there were several tweets expressing disappointment with the paper during the discussion. At first glance this appeared to be an accessible paper on medical education which would provoke a lot of discussion. But when you look more closely it contains complex analyses, the results of which many of those reading the paper did not manage to get to grips with. The poor presentation of some of the results in the additional file did not help. And the authors reach conclusions which are hard to justify.

Overall the main finding of the research was that medical schools seemed to have little impact on how students performed in postgraduate examinations. The better a school’s intake was at passing exams at 18, the more likely its graduates were to pass exams a decade later. This prompted me to ask whether that suggested that rather than a national exit exam we needed a national entrance exam. @twsy suggested that if we wanted to look at the ‘value added’ by the medical school then we would need both a national entrance and a national exit exam.

Some medical schools have graduates who take longer to pass professional exams. Is this an issue that should concern medical schools? And if it is, what should we do about it? The correlation demonstrated in this paper suggests that if we wanted uniform outcomes for graduates of all medical schools then we would need a uniform intake. It is unlikely that it would be socially acceptable to make students sit a national entrance exam and then allocate them to medical schools across the UK to ensure an equal mix of academic performance. So we are left with the current situation.

We asked if performance of graduates in post-grad exams was a good indicator of the performance of a medical school. We didn’t think that it was, but we weren’t sure how performance of a medical school should be assessed, or if it should be at all. As an aside there was some discussion about what would make a good doctor at the individual level. Was it ‘head knowledge’ or good communication? It was pointed out that UKFPO was now trialling an assessment of situational judgement as a way of allocating doctors to further training. This is certainly something I would like to learn more about.

Access to the reviewers’ comments was generally lauded. We would like to see more of this, as it can help understanding of a paper. In my own opinion it would have been interesting to see some editorial comment on how two such different reviews were reconciled in reaching the decision to publish.

I know that some people missed out on participating in this discussion, so I hope that you will take the opportunity to leave a comment here.

Should we have discussed this paper? Yes, we should. Many people will have heard of it before, and now they hopefully have a better understanding of its findings and limitations. #win!

Thank you Fi for asking me to write an introduction and suggest some discussion points for tonight’s Twitter journal club. Having a special interest in medical education I was very happy to see this paper by McManus et al. suggested for Week 4 of the journal club.

Discussion point 1: What factors do you think might explain variation in performance in MRCP between medical schools?

What did the authors do? They looked at outcomes in the MRCP (Membership of the Royal College of Physicians) examination for entrants from all medical schools between 2003 and 2005. They found that in the Part 1 and Part 2 exams, Cambridge, Oxford and Newcastle graduates did significantly better than average, and the performance of Liverpool, Aberdeen, Dundee and Belfast students was significantly worse. In the PACES section (a clinical examination based on a modified OSCE) Oxford students performed significantly better, and Liverpool, Dundee and London students significantly worse.
This first part of the analysis is quite easy to understand, but the authors then go on to construct a multilevel model to see if they can explain the variation between medical schools.
Since ethnicity and gender are known to be correlated with MRCP performance, and these were available as individual-level data, the authors adjusted for them.

Discussion point 2: Is it surprising that the average offer to those applying to a medical school may predict performance of graduates in MRCP?
Two complex analyses were performed in this study: a multilevel model and a structural equation model. Unfortunately the results of the multilevel model are not presented in an easy-to-understand format, although there is a figure in a downloadable additional file.
The authors looked for correlations between medical school performances in MRCP and a plethora of other factors. This information was pulled from other sources such as the Guardian tables, a survey of the cohort of medical students who started university in 1990/1, and the offers which each medical school made to students in the mid-1990s.
They found correlations between better performance in MRCP and:
– the offers made to students (A level or Scottish Higher grades): the higher the offer, the better the performance
– the proportion of final-year medical students reporting an interest in a career as a physician, and reporting interesting medical teaching
– the percentage of graduates taking MRCP: the higher the percentage, the better the performance
However, when these factors were analysed together, only admission grades appeared significant.
They also looked at correlations with data from the Guardian tables. In a multiple regression, again only admission criteria were found to be significant.
In the multilevel model, the entrance qualifications of graduates were found to explain 62% of the variance, which in this type of study is a large amount. The remaining 38% of the variance was unexplained, but a commenter has suggested that the contribution of entrance qualifications may be under-estimated because of a ‘ceiling effect’: many entrants may have been offered the highest grades of 3 As.
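For anyone unfamiliar with multilevel models, a minimal sketch of the kind of random-intercept model being described here (the notation is mine, not the authors’):

MRCP score of graduate i from school j = b0 + b1 × (entrance qualifications of graduate i) + u_j + e_ij

where u_j is a school-level random effect and e_ij is individual-level error. Comparing how much variance remains once the entrance-qualification term is included is broadly how a figure like the 62% quoted above is arrived at.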

Discussion point 3: Are the authors correct to conclude that this analysis suggests that a national exit exam should be introduced?
What do the results of this study mean? To place the study in context it is perhaps useful to start with the last words of the authors in the paper. They believe that this analysis supports the case for the “introduction of a national licensing examination” in the UK.
We don’t have a national exit exam in the UK. Instead the General Medical Council (GMC) regulates medical education through individual medical schools. Quality assurance of medical education and the final exams which must be passed to gain provisional entry to the medical register rests with the GMC and external examiners.
This analysis shows that different medical schools admit students with different school qualifications, and that the higher the entrance requirements, the greater the subsequent success in MRCP may be. McManus et al refer to a study which shows this is also true of performance in the MRCGP exams, but I cannot find that publication.

Discussion point 4: How should we judge the performance of medical schools? Is performance of graduates in post-graduate examinations important?
When this study was published it contributed to discussion about whether a national exit exam should be introduced. Ian Noble, a Sheffield medical student writing in the BMJ, suggested that medical schools should be judged on whether the graduates they produce perform as competent foundation doctors, not on how well those graduates perform in subsequent examinations. Since it is rare for graduates to be pulled up for poor clinical performance, this suggests that we have no problem for a national exit exam to solve.

Discussion point 5: How helpful is it to read reviewers’ comments on a paper? Is this something that all journals should aim for?
This paper is published in BMC Medicine, which also posts the comments of the peer reviewers. One of the reviewers suggested that this paper should not be published because, although it involved a commendable analysis of multiple datasets, it did not “help me to understand the problems in medical education better, nor does it help me to improve medical education or to advance medical education as a science”. The authors’ response to this criticism is also published. The reviewers and the main author also had a dispute over another analysis published in BMC Medicine, on gender and ethnicity and success in MRCP, but that discussion took place pre-submission and so remains private correspondence between those involved.

Conflict of Interest: I’m a Belfast graduate!
