
Original post by crazylemon
Exams annoy me in some respects, but they're better than assessed coursework.

My preferred method would be total continuous assessment, but that would be impossible.


Back to assessment methods.

I think that progress testing with finals multiple times each year would definitely be beneficial in clinical years at all medical schools.
Original post by carcinoma
I think that progress testing with finals multiple times each year would definitely be beneficial in clinical years at all medical schools.

Concurred.
Original post by carcinoma
Back to assessment methods.

I think that progress testing with finals multiple times each year would definitely be beneficial in clinical years at all medical schools.


I would rather medical school did not become like sixth form where it was a constant run of exams that meant teaching was to the test rather than for the ultimate purpose of gaining knowledge.
Original post by Becca-Sarah
I would rather medical school did not become like sixth form where it was a constant run of exams that meant teaching was to the test rather than for the ultimate purpose of gaining knowledge.


Quite the contrary.

Progress testing would eliminate that: testing beyond the level of the student's current progress within the course would discourage cramming.

These tests would be independent of the curriculum. There can be no intensive pre-test revision strategies to distract from the designed learning of the curriculum (1). This prevents a superficial, test-driven approach to study (2).

1. Newble DI, Jaeger K. The effect of assessments and examinations on the learning of medical students. Med Educ 1983; 17(3):165-171.

2. Blake JM, Norman GR, Keane DR, Mueller CB, Cunnington J, Didyk N. Introducing progress testing in McMaster University's problem-based medical curriculum: psychometric properties and effect on learning. Acad Med 1996; 71(9):1002-1007.
Original post by Becca-Sarah
I would rather medical school did not become like sixth form where it was a constant run of exams that meant teaching was to the test rather than for the ultimate purpose of gaining knowledge.


Agreed, but there has still got to be some sort of assessment to prove the teaching is allowing you to gain this knowledge. Correct me if I'm wrong, but I'd feel a series of smaller synoptic exams/assessments would be more beneficial for ensuring knowledge is retained than one big lump exam, clinically and pre-clinically.
Original post by carcinoma
Quite the contrary.

Progress testing would eliminate that: testing beyond the level of the student's current progress within the course would discourage cramming.

These tests would be independent of the curriculum. There can be no intensive pre-test revision strategies to distract from the designed learning of the curriculum (1). This prevents a superficial, test-driven approach to study (2).

1. Newble DI, Jaeger K. The effect of assessments and examinations on the learning of medical students. Med Educ 1983; 17(3):165-171.

2. Blake JM, Norman GR, Keane DR, Mueller CB, Cunnington J, Didyk N. Introducing progress testing in McMaster University's problem-based medical curriculum: psychometric properties and effect on learning. Acad Med 1996; 71(9):1002-1007.




Dude, you did not just reference your own post...
Original post by Isometrix
Dude, you did not just reference your own post...


I think I did; that made me feel unclean.

(To be fair, I'm writing an SSC right now, so I think referencing may have taken over my life.)
Original post by gozatron
Agreed, but there has still got to be some sort of assessment to prove the teaching is allowing you to gain this knowledge. Correct me if I'm wrong, but I'd feel a series of smaller synoptic exams/assessments would be more beneficial for ensuring knowledge is retained than one big lump exam, clinically and pre-clinically.


I think there is a need to test, of course, but there's already enough crammed into the medical degree as it is (sadly at the expense of being able to learn much in detail, in most curricula) without making space for more frequent assessment. And there is a need to check that you know everything you should know - after all, your patient is not going to present with only the thing you learned most recently.

I guess my beef with any form of nationalised testing, whether that be progress testing or any other exam, is that regardless of whether you can teach to it or not, it will influence curriculum design, and it will always be inherently biased towards certain schools - is this not why there is as yet no agreed exam, because certain schools are pushing for in-depth science and others are pushing for communication skills, etc., all to the greatest benefit of their own students?
Original post by gozatron
Agreed, but there has still got to be some sort of assessment to prove the teaching is allowing you to gain this knowledge. Correct me if I'm wrong, but I'd feel a series of smaller synoptic exams/assessments would be more beneficial for ensuring knowledge is retained than one big lump exam, clinically and pre-clinically.


True, but what material would you use for the regular clinical knowledge examinations?

However, using finals material for first years hasn't been shown to be very reliable.

Original post by Becca-Sarah
I think there is a need to test, of course, but there's already enough crammed into the medical degree as it is (sadly at the expense of being able to learn much in detail, in most curricula) without making space for more frequent assessment. And there is a need to check that you know everything you should know - after all, your patient is not going to present with only the thing you learned most recently.

I guess my beef with any form of nationalised testing, whether that be progress testing or any other exam, is that regardless of whether you can teach to it or not, it will influence curriculum design, and it will always be inherently biased towards certain schools - is this not why there is as yet no agreed exam, because certain schools are pushing for in-depth science and others are pushing for communication skills, etc., all to the greatest benefit of their own students?



That is the exact reason for the lack of a national exam. However, what would be so wrong with multiple in-house exams at finals level for students in clinical years?

Surely repeated testing of the expected final outcome would be beneficial?
Original post by carcinoma
True, but what material would you use for the regular clinical knowledge examinations?

However, using finals material for first years hasn't been shown to be very reliable.

That is the exact reason for the lack of a national exam. However, what would be so wrong with multiple in-house exams at finals level for students in clinical years?

Surely repeated testing of the expected final outcome would be beneficial?


I fail to see how repeated testing of the same material (which I presume the progress test to be, else how would it show progress over time?) shows anything except how many questions a student can memorise from previous papers? Surely repeated testing of finals material just massages the results? To my mind, testing should be appropriate to the level of teaching - I can't see the point in presenting students with questions that they haven't the expertise to answer. Else why not just replace finals with questions taken from the MRCP/S/GP and see how students perform? We have finals mocks in December and the real thing in June of fourth year, with an OSCE in fifth year, and to me that seems an appropriate amount of testing without continually eroding practical learning time. At clinical level, you're being tested every day on the wards, in a much more realistic setting than an exam hall - in real life you can't go "I'll think of the answer in a minute, I'll just leave that patient and come back to it later".
Original post by Becca-Sarah
I fail to see how repeated testing of the same material (which I presume the progress test to be, else how would it show progress over time?) shows anything except how many questions a student can memorise from previous papers? Surely repeated testing of finals material just massages the results? To my mind, testing should be appropriate to the level of teaching - I can't see the point in presenting students with questions that they haven't the expertise to answer. Else why not just replace finals with questions taken from the MRCP/S/GP and see how students perform? We have finals mocks in December and the real thing in June of fourth year, with an OSCE in fifth year, and to me that seems an appropriate amount of testing without continually eroding practical learning time. At clinical level, you're being tested every day on the wards, in a much more realistic setting than an exam hall - in real life you can't go "I'll think of the answer in a minute, I'll just leave that patient and come back to it later".


I am not suggesting using the exact same material; I am suggesting a mix of 80% clinical medicine (at the level needed for FY1s) with 20% basic sciences material. This would give a vast array of different questions.

It would be far too hard to prepare for that kind of test. It realistically would not erode practical learning time, as an exam would only be about 3 hours long and delivered every 8 weeks. It would, in principle, sample at regular intervals from the complete domain of knowledge considered a requirement for medical students on completion of the undergraduate programme.
Original post by Becca-Sarah
I fail to see how repeated testing of the same material (which I presume the progress test to be, else how would it show progress over time?) shows anything except how many questions a student can memorise from previous papers? Surely repeated testing of finals material just massages the results? To my mind, testing should be appropriate to the level of teaching - I can't see the point in presenting students with questions that they haven't the expertise to answer. Else why not just replace finals with questions taken from the MRCP/S/GP and see how students perform? We have finals mocks in December and the real thing in June of fourth year, with an OSCE in fifth year, and to me that seems an appropriate amount of testing without continually eroding practical learning time. At clinical level, you're being tested every day on the wards, in a much more realistic setting than an exam hall - in real life you can't go "I'll think of the answer in a minute, I'll just leave that patient and come back to it later".


Actually, provided that questions from the MRCP/S/GP are delivered in equal proportions, numerous times each year, and that the pass mark expected from a 5th year in their final exam is lowered to an appropriate level (e.g. 40%), I can't see how this would be a bad thing.

But the problem with that is that the material is what is expected of doctors in speciality training. If you replace the material with what is required of a doctor in their first year of the foundation programme, you have an exam with content at the appropriate level and a reliable testing method, as required by Tomorrow's Doctors.
Original post by carcinoma
I am not suggesting using the exact same material; I am suggesting a mix of 80% clinical medicine (at the level needed for FY1s) with 20% basic sciences material. This would give a vast array of different questions.

It would be far too hard to prepare for that kind of test. It realistically would not erode practical learning time, as an exam would only be about 3 hours long and delivered every 8 weeks. It would, in principle, sample at regular intervals from the complete domain of knowledge considered a requirement for medical students on completion of the undergraduate programme.


You're not addressing a lot of my points though. What is the point in testing students at a level above that required at that stage of their training? Why is continual assessment even needed? Surely practical assessment of knowledge on a daily basis (i.e. ward round questions, 'does this student know what they're talking about re this patient's diagnosis, management, etc?' and so on) is far more appropriate than exams every 8 weeks? I'd be fairly fed up if I had to sit a 3hr exam every 8 weeks! Exam stress is quite sufficient without making it any more frequent!
carcinoma
x


Apologies, we seem to be cross-posting, so I missed your subsequent comments.

How demoralising is that, to be 'passing' an exam when you get 60% of the questions wrong?! Medical school is supposed to encourage, not beat down, its students.
Original post by Becca-Sarah
You're not addressing a lot of my points though. What is the point in testing students at a level above that required at that stage of their training? Why is continual assessment even needed? Surely practical assessment of knowledge on a daily basis (i.e. ward round questions, 'does this student know what they're talking about re this patient's diagnosis, management, etc?' and so on) is far more appropriate than exams every 8 weeks? I'd be fairly fed up if I had to sit a 3hr exam every 8 weeks! Exam stress is quite sufficient without making it any more frequent!


Apologies for not addressing all your points; I'll try to do that now. Assessment at the stage of the students' training allows cramming, as the students know what material they will be tested on. It does not encourage continuous learning: if you can just learn for the exams in a "revision/cramming" period, people can pass exams without developing knowledge regularly.

Continuous assessment removes the stress of exams, as they are so frequent and based on material which cannot be prepared for in the earlier stages of the course; due to the vast array of question areas, even in later stages it requires continuous knowledge acquisition rather than revision or last-minute cramming.

Of course, practical knowledge assessment would be the ideal; however, that kind of on-the-spot assessment can only cover a relatively small volume of knowledge, so it would be an inappropriate method of testing medical knowledge as a whole.

OK, well, every 10 weeks rather than 8 weeks (4 times each year) then. Personally, I find it less stressful around exam periods due to the frequency and low expectation - I'm only expected to be achieving ~25%.
Whilst we're on the topic of exams, what do people think of not disclosing marks and scores and simply having a pass or fail system? Obviously marks and the like are collected for foundation post ranking purposes and analysing the examination system etc., but they are not published for students. The students are just told if they passed or failed (or any borderline-type marks that some schools give out).

I feel like it might reduce some of the petty competition that seems rife amongst medical students and facilitate students learning from each other.
Original post by carcinoma
Anastomosis (with some angiogenesis)


No that's not it :nah:

There is a specific term for it that only means that one specific thing. And it's not angiogenesis - the arteries are already there, they just open up more to accommodate more blood to become the major artery in the limb.


Original post by Kinkerz
Very much, just paper-based rather than web-based.


See, our "logbook" is a written-up patient history/examination findings for proforma A, and then proforma B is the background of the disease (aetiology, presentation, investigations, management, etc.), and we have a minimum of 11 to do each module :sadnod:
Original post by Kinkerz
Whilst we're on the topic of exams, what do people think of not disclosing marks and scores and simply having a pass or fail system? Obviously marks and the like are collected for foundation post ranking purposes and analysing the examination system etc., but they are not published for students. The students are just told if they passed or failed (or any borderline-type marks that some schools give out).

I feel like it might reduce some of the petty competition that seems rife amongst medical students and facilitate students learning from each other.


That's how it's being done at my uni now for year 1s and 2s. Our year was the last cohort to go through the grade system for exams (A - distinction, B - merit, etc., E - fail). I agree that it did make people super competitive, but I wouldn't say it's a bad thing. Those who worked harder than others were suitably rewarded with a higher grade, so it's only fair.
Original post by Isometrix
That's how it's being done at my uni now for year 1s and 2s. Our year was the last cohort to go through the grade system for exams (A - distinction, B - merit, etc., E - fail). I agree that it did make people super competitive, but I wouldn't say it's a bad thing. Those who worked harder than others were suitably rewarded with a higher grade, so it's only fair.

In principle I agree with you, but I've experienced idiotic things like people being reluctant to disclose a resource they got a certain piece of information from or not contributing in PBL so as to avoid disseminating their work.
Original post by Kinkerz
In principle I agree with you, but I've experienced idiotic things like people being reluctant to disclose a resource they got a certain piece of information from or not contributing in PBL so as to avoid disseminating their work.


:no: That's just sad.

Having said all that, I would totally advocate a pass-fail system, because I was never one of those who got the high grades anyway :tongue:
