Thank you, Fi, for asking me to write an introduction and suggest some discussion points for tonight’s Twitter journal club. Having a special interest in medical education, I was very happy to see this paper by McManus et al. suggested for Week 4 of the journal club.
Discussion point 1: What factors do you think might explain variation in performance in MRCP between medical schools?
What did the authors do? They looked at outcomes in the MRCP (Membership of the Royal College of Physicians) examination for entrants from all medical schools between 2003 and 2005. They found that in the Part 1 and Part 2 exams, Cambridge, Oxford, and Newcastle graduates did significantly better than average, and the performance of Liverpool, Aberdeen, Dundee and Belfast students was significantly worse. In the PACES section (a clinical examination based on a modified OSCE), Oxford students performed significantly better and Liverpool, Dundee and London students significantly worse.
This first part of the analysis is quite easy to understand but the authors then go on to construct a multi-level model to see if they can explain variation between the medical schools.
Since it is known that ethnicity and gender are correlated with MRCP performance, and the authors had these as individual-level data, they adjusted for both.
Discussion point 2: Is it surprising that the average offer to those applying to a medical school may predict performance of graduates in MRCP?
Two complex analyses were performed in this study: a multilevel model and a structural equation model. Unfortunately, the results of the multilevel model are not presented in an easy-to-understand format, although there is a figure in a downloadable additional file.
The authors looked for correlations between medical school performances in MRCP and a plethora of other factors. This information was pulled from other sources such as the Guardian tables, a survey of the cohort of medical students who started university in 1990/1, and the offers which each medical school made to students in the mid-1990s.
They found positive correlations between MRCP performance and:
- The offers made to students (A-level or Scottish Higher grades): the higher the offer, the better the performance.
- The proportion of final-year medical students reporting an interest in a career as a physician, and reporting interesting medical teaching.
- The percentage of graduates taking MRCP.
However, when these factors were analysed together, only admission grades remained significant.
They also looked at correlations with data from the Guardian tables. In a multiple regression, again only admission criteria were found to be significant.
In the multilevel model, graduates’ entrance qualifications were found to explain 62% of variance, which in this type of study is a large amount. The remaining 38% of variance was unexplained, but a commenter has suggested that the contribution of entrance qualifications may be under-estimated because of a ‘ceiling effect’: many entrants may have been offered the highest grades of three As.
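The ceiling-effect point can be illustrated with a small simulation. The sketch below is purely hypothetical — invented numbers, not the authors’ data or their multilevel model — and simply shows how capping a predictor at a top grade compresses its variation and attenuates the share of variance it appears to explain:

```python
import numpy as np

# Purely illustrative simulation (not the authors' data): hypothetical
# school-level MRCP performance driven by mean entrance grade plus noise.
rng = np.random.default_rng(42)
n = 200                                          # hypothetical schools
entry_grade = rng.normal(0.0, 1.0, n)            # standardised mean entry grade
mrcp_score = 0.8 * entry_grade + rng.normal(0.0, 0.5, n)

def r_squared(x, y):
    """Share of variance in y explained by a linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

full = r_squared(entry_grade, mrcp_score)

# A ceiling: all grades above a cutoff are recorded as the top grade
# (e.g. three As), compressing the predictor and lowering the apparent R^2.
capped = r_squared(np.minimum(entry_grade, 0.5), mrcp_score)

print(f"R^2 with full grades:   {full:.2f}")
print(f"R^2 with capped grades: {capped:.2f}")
```

Under these made-up assumptions the capped predictor explains noticeably less variance than the uncapped one, which is the commenter’s point: if many entrants hold the same top grade, entrance qualifications may matter even more than the 62% figure suggests.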
Discussion point 3: Are the authors correct to conclude that this analysis suggests that a national exit exam should be introduced?
What do the results of this study mean? To place the study in context it is perhaps useful to start with the last words of the authors in the paper. They believe that this analysis supports the case for the “introduction of a national licensing examination” in the UK.
We don’t have a national exit exam in the UK. Instead the General Medical Council (GMC) regulates medical education through individual medical schools. Quality assurance of medical education and the final exams which must be passed to gain provisional entry to the medical register rests with the GMC and external examiners.
This analysis shows that different medical schools admit students with different school qualifications, and that the higher the entrance requirements, the better graduates subsequently tend to perform in MRCP. McManus et al. refer to a study showing that this is also true of performance in the MRCGP exams, but I cannot find that publication.
Discussion point 4: How should we judge the performance of medical schools? Is performance of graduates in post-graduate examinations important?
When this study was published it contributed to discussion about whether a national exit exam should be introduced. Ian Noble, a Sheffield medical student writing in the BMJ, suggested that medical schools should be judged on whether their graduates perform as competent foundation doctors, not on how well graduates perform in subsequent examinations. Since it is rare for graduates to be pulled up for poor clinical performance, this suggests that there is no problem for a national exit exam to solve.
Discussion point 5: How helpful is it to read reviewers’ comments on a paper? Is this something that all journals should aim for?
This paper is published in BMC Medicine, which also posts the comments of the peer reviewers. One of the reviewers suggested that the paper should not be published because, although it involved a commendable analysis of multiple datasets, it did not “help me to understand the problems in medical education better, nor does it help me to improve medical education or to advance medical education as a science”. The authors’ response to this criticism is also published. The reviewers and the lead author also had a dispute over another analysis published in BMC Medicine, on gender, ethnicity and success in MRCP, but that discussion took place pre-submission and so remains private correspondence between those involved.
Conflict of Interest: I’m a Belfast graduate!