The Student Room Group

Reactions to new study in favour of a national licensing exam

Background: The UK General Medical Council has emphasized the lack of evidence on whether graduates from different UK medical schools perform differently in their clinical careers. Here we assess the performance of UK graduates who have taken MRCP(UK) Part 1 and Part 2, which are multiple-choice assessments, and PACES, an assessment of clinical examination and communication skills using real and simulated patients, and we explore the reasons for the differences between medical schools.

Method: We performed a retrospective analysis of the performance of 5827 doctors graduating from UK medical schools who took Part 1, Part 2 or PACES for the first time between 2003/2 and 2005/3, and of 22453 candidates taking Part 1 from 1989/1 to 2005/3.

Results: Graduates of UK medical schools performed differently in the MRCP(UK) examination between 2003/2 and 2005/3. Part 1 and 2 performance of Oxford, Cambridge and Newcastle-upon-Tyne graduates was significantly better than average, and the performance of Liverpool, Dundee, Belfast and Aberdeen graduates was significantly worse than average. In the PACES (clinical) examination, Oxford graduates performed significantly above average, and Dundee, Liverpool and London graduates significantly below average. About 60% of medical school variance was explained by differences in pre-admission qualifications, although the remaining variance was still significant, with graduates from Leicester, Oxford, Birmingham, Newcastle-upon-Tyne and London overperforming at Part 1, and graduates from Southampton, Dundee, Aberdeen, Liverpool and Belfast underperforming relative to pre-admission qualifications. The ranking of schools at Part 1 in 2003/2 to 2005/3 correlated 0.723, 0.654, 0.618 and 0.493 with performance in 1999–2001, 1996–1998, 1993–1995 and 1989–1992, respectively.

Conclusion: Candidates from different UK medical schools perform differently in all three parts of the MRCP(UK) examination, with the ordering consistent across the parts of the exam and with the differences in Part 1 performance being consistent from 1989 to 2005. Although pre-admission qualifications explained some of the medical school variance, the remaining differences do not seem to result from career preference or other selection biases, and are presumed to result from unmeasured differences in ability at entry to the medical school or to differences between medical schools in teaching focus, content and approaches. Exploration of causal mechanisms would be enhanced by results from a national medical qualifying examination.

The full report is here: http://www.biomedcentral.com/content/pdf/1741-7015-6-5.pdf

Any thoughts?
Reply 1
:eek: But I thought all medical schools were exactly the same? :wink:




(Intelligent response may follow when I am free from exams... Thank you for posting - will be interesting to read!)
Reply 2
Would be interesting to see a breakdown of London results pre and post 1995 - lots of changes in the local medical infrastructures saw massive, massive changes in the way medical students were taught... We used to be very traditional (pre-1999), with a subject-based didactic course (the "1999 curriculum"). It's all very different now!

Will read the full report when I get a sec!
i've read it, it's hardly profound. a lot of it seems to be devoted to saying over and over again that brighter students perform better at exams....

the most interesting thing i noticed was that candidates perform worse the longer they've been practising when they take it. this suggests to me that the test itself may be somewhat flawed. what is it trying to test? it doesn't appear to be testing things that are used clinically as much as it should if experience isn't an advantage. then again it could be that...

there is little discussion of how the situation has changed (is changing?) as a result of new curriculums, and i would suggest that it's too soon to do so for many places. the other thing is that they haven't taken into account the huge variation in offer uptake across universities. the starting qualifications used are those of the students offered a place, not those starting. i know that here many of those with the highest ucas tariffs tend to go elsewhere post-offer :wink: seems a tad unfair to penalise you for students you never got. i would however suspect that oxbridge has a fairly high uptake percentage, although i haven't seen data.

there's no discussion of how they've dealt with intercalation. do you think it serves as an advantage for a student when it comes to the exams? do results vary between schools that have varying percentages of intercalators? there's certainly no discussion of how it moves people around cohorts, and i suspect, from the detail of the rest of their data, that they have no way of doing this.

there's a HUGE gap in the middle of their data in some categories. where's all the data for 1992-2002? surely this is where all the curriculum-changing action was going on, yet they've used data in other categories without the corresponding data to compare it to. plus i think they published before they were ready. there's only 2 years of data used on this side of that big gap; they could do with a couple more years i think. since when could you call something an average with only 2 years of data?

they also mention that the biggest variables are things like race, gender.... then never mention these again throughout :confused:

in fact i could moan about this study for a while. guess who's been doing critical appraisals all afternoon....

on the subject of a national exam, i don't know what i think of it. i know that my school here doesn't take huge amounts of responsibility/credit for how we do. it is very self-directed. there's loads and loads of support there, but only if you take it and work for yourself. i think if they were held to account more directly this would probably revert, to a certain extent, to a more involved approach. i also think that it might remove several aspects of my curriculum that are a bit quirkier - not so useful when it comes to exams, but maybe more useful in a life/sanity sense. i'd be sad to see them go.

however, i also think that i'd be happy to sit a national exam were the results used responsibly. i think it'd be wrong just to rank applicants based on the results of this test. this is the one thing i like about the quartile system, it gives a general idea of how an applicant is performing academically without getting to the stage where whether you guessed b or c on one random question in the middle of a huge test determines your future.

plus what sort of topics would be on it? would it all be a&p, pharm and clinical stuff, or would there be things like public health and ethics discussions in there? how would this account for differences in med school preference? ie, when we do our sociology stuff we use the models our med school favours, but there are thousands of others that just aren't the system we use - how do you account for that? do we all have to learn the same one? if you remove it altogether, what's to stop medical schools just teaching to the exam and skipping potentially useful (but not examined :wink:) aspects, kinda like they do back in schools? what happened to learning for the sake of learning?

sorry that's really verbose :redface: i'm revising for my pp exam....
Reply 4
I've not read the full thing, will have a poke around when I can.

I think you hit on the crux of it at the end though Bright Star, that all any exam can be is a surrogate marker. Now, MRCP results against medschool attended is interesting in and of itself, and the data mentioned in the abstract is worth following up on. Following patterns, particularly in the London schools as they close, merge and shift their curricula from fairly uniform to about as diverse as they could get, would undoubtedly be interesting. Seeing as this is an exam a significant number of graduates have to take, and the only way into senior medicine, I would've thought it'd be something measured as a matter of routine rather than exception, to be honest.

The danger is trying to use the data to say anything more than that, really. Defining a 'good doctor', even if you limit it strictly to clinical competence and completely ignore colleague and patient communication and teaching skills, is a ridiculously slippery thing to try and do. It's that last bit that gives me a kneejerk aversion to putting all the weight behind a national licence. That being said, there isn't a medschool model I've heard of that doesn't involve a Great Big Exam of some nature, and whenever you do that you get people teaching and learning to it, so maybe it won't make all that much difference.
Reply 5
I haven't read this thing, but does it take account of the new medschools that have recently come onto the scene? to be fair to them, they'll need a few years to get over teething problems and the like.

Also, it strikes me that for PBL courses, surely there's going to be a huge degree of variation built in regardless of teaching? Won't this have a wild pendulum effect?
Reply 6
Wangers
I haven't read this thing, but does it take account of the new medschools that have recently come onto the scene? to be fair to them, they'll need a few years to get over teething problems and the like.


MRCP? Hmm, I'm not sure that they will have had any attempt it yet - there's necessarily going to be a lag with this data because you can only sit it 18 months after graduating at the earliest! :p:

Peninsula's first grads were summer 2007
BSMS first cohort is this year.
HYMS also seems to be this year.
Have i forgotten anyone?
Reply 7
Elles
MRCP? Hmm, I'm not sure that they will have had any attempt it yet - there's necessarily going to be a lag with this data because you can only sit it 18 months after graduating at the earliest! :p:

Peninsula's first grads were summer 2007
BSMS first cohort is this year.
HYMS also seems to be this year.
Have i forgotten anyone?


Like I said, I haven't read it :p: :redface:
the other thing that they haven't mentioned is that we take finals a year before everyone else. that's a whole year to be forgetting all that medicine bull****, considering that, as they said, performance seems to be linked to how long it's been since you graduated.

okay i think i've spent enough of my evening looking at this, might get back to analysing my exam paper about TB....
bright star
the other thing that they haven't mentioned is that we take finals a year before everyone else. that's a whole year to be forgetting all that medicine bull****, considering that, as they said, performance seems to be linked to how long it's been since you graduated.

okay i think i've spent enough of my evening looking at this, might get back to analysing my exam paper about TB....

A lot of MRCP stuff is highly clinically relevant. I know because I'm doing practice MCQs at the moment. If you're the sort of person who feels it is 'bull****' then you ought to be taking MRCS and the scalpel jockey route, not MRCP.

Someone said something about comparing london schools pre-99 and post - would indeed be interesting. I maintain that the 'traditional' teaching methods, although at times oft dry and separate from clinical teaching, gave a much more solid scientific basis for future learning. The new 'trendy' ways of learning - often very much peer-led - whilst perhaps being good in the short term, lead to large gaps in knowledge and leave the student without a basis of reasoning to fall back on.

Anecdotes can give evidence either way so are useless, but I will tell a story from last week.
A friend had a patient who was an alcoholic, but had come in vomiting AND withdrawing. She prescribed chlordiazepoxide - as one should. However, this is an oral preparation and the guy couldn't keep the pills down. I was dealing with my own patient in the same bay, so saw what was going on and suggested IM diazepam. She is a london-taught doctor, very PBL. She demanded to know why I would give IM diazepam if he wasn't fitting. For the same reason as I would give chlordiazepoxide, I replied.

Turns out that she didn't know chlordiazepoxide was a benzo. Didn't know the action they worked through, nor the basis of DTs. Her pharm knowledge and knowledge of neuro were too poor to fall back on.
Rubbish example perhaps, but it brought it home to me. I had never used diazepam on a drunk patient before, but logic dictated to me that it would work, what route and dose would be safe, etc.
Reply 10
Reading the BBC article, the one that the punters would see, I'm struck by how loosely it's all linked.

The key sentence is;
We assume, because there are national standards, that all graduates have the necessary skills, but there is no way of comparing performance.
Maybe it's just me, but I'm not sure how they made the leap from clinical competencies to postgrad exam performance.
Reply 11
The quotes in that article really did make me wonder, it was quite typical of the way people seem to think now. One surrogate marker to test an abstract concept is set against another, just running an enormous network of targets for targets to reflect targets.

Like I say, the difference in MRCP performance is something that's worth following up in greater detail, rather than the barren way this report presents it. There is some effort to explain these differences, but it makes no sense at all to me (Is there more to this report than the link above, or is that really it?). Because to the paranoid blogger part of my brain, it feels like a starting-from-a-conclusion data grab to justify the national qualifying exam policy.

So we make a national qualifying exam? So we can better prepare students to sit the MRCP? So they can then do what? Get the name consultant or 'junior consultant', so we can all look at all the great 'consultants' we're rattling out? When no-one studies anything that isn't directly tested and can't cope without NICE hand-holding and a big number to work to, will we have improved UK medicine? Everyone is passing around pieces of paper to create the illusion of responsibility when in fact it's exactly what we're trying to avoid. By setting target against qualification and best-practice guideline, we avoid liability for anything that might go wrong when it all hits the fan.

I'm starting to sound like a certain Ethics and Law lecturer here and maybe slightly paranoid but then again, this is how the rest of the world works.
