    • Thread Starter
    What is cohort marking?
    • Elles
    As I understand it: your mark depends on how your cohort do.

    E.g. they decide that, of everyone who takes the exam, 20% will get the top grade, 20% will get the bottom grade, and everyone else will get the middle grade. Or any other fixed percentages.

    Compare that with criterion-referenced marking, where they decide in advance what the standard is for each grade/mark and then mark according to that - so everyone could get the top grade, or everyone could get the lowest, or anything in between. (Cohort marking is what's usually called norm-referenced marking.)

    So ideally/theoretically:
    - Competence exams (is this person good enough to be a Doctor?) = criterion referenced.
    - Aptitude exams (where you really want to be able to distinguish between people who might all be pretty good), or exams from hardcore meany examiners = cohort marked.

    But if you've read something specific that refers to cohort marking, look for any more detailed definition they give?
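    To make that concrete, here's a rough sketch in Python (the marks and boundaries are made up - just to show the two approaches):

    ```python
    import numpy as np

    marks = np.array([42, 51, 55, 58, 62, 64, 67, 71, 78, 86])  # hypothetical cohort

    # Cohort (norm-referenced) marking: fixed proportions get each grade,
    # whatever the raw marks look like. Here: top 20% / middle 60% / bottom 20%.
    lo, hi = np.percentile(marks, [20, 80])
    cohort_grades = ["top" if m >= hi else "bottom" if m <= lo else "middle"
                     for m in marks]

    # Criterion-referenced marking: boundaries fixed in advance (say 70 and 50),
    # so in principle everyone could get the top grade - or the bottom one.
    criterion_grades = ["top" if m >= 70 else "bottom" if m < 50 else "middle"
                        for m in marks]
    ```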
    • JCM89
    Basically, it provides a way to convert an exam's raw score into an honours/pass/fail grade, using the overall statistical performance of the cohort to define the details of the conversion. This approach is fairer than strict percentage-point grading, which sets arbitrary boundaries without regard to the overall performance of the cohort.
    • Elles
    (Original post by JCM89)
    This approach is fairer than strict percentage-point grading, which sets arbitrary boundaries without regard to the overall performance of the cohort.
    What % of a medical school cohort should be failed to keep things fair, do you think?


    (Surely deciding where to draw the 'statistical boundaries' is just as arbitrary? :p:
    Although we did have a Consultant who argued for overtly cohort-marked medical finals, having greater faith in cohort-to-cohort similarity than in decisions about question difficulty and paper setting... but we could never pin him down to a 'fair' %!)
    • Reply
    (Original post by Elles)
    What % of a medical school cohort should be failed to keep things fair, do you think?


    (Surely deciding where to draw the 'statistical boundaries' is just as arbitrary? :p:
    Although we did have a Consultant who argued for overtly cohort-marked medical finals, having greater faith in cohort-to-cohort similarity than in decisions about question difficulty and paper setting... but we could never pin him down to a 'fair' %!)
    I'd agree with your consultant: a neuroanatomist might think their question is fair, but if most of the cohort fail it, it almost certainly wasn't.

    I think the fail mark should be arbitrary to an extent. I'm sure there are statistical ways of using percentages or standard deviations, but at some point the medical school needs to be able to identify the students who've failed and make sure that they're ****** off.
    • Thread Starter
    Thanks for the clear explanations.
    Is it standard practice in all medical schools?
    • Thread Starter
    OK, so just for clarification: if I were to get 60/100 in an exam, my grade would depend on what everyone else gets? So if everyone else averages about 80, then basically I fail?
    • Reply
    Not in Oxford pre-clinical Part 'B's - you need a 4/8 average over 3 essays (marked against specific criteria) or it's a viva/re-take. :yes:
    • Reply
    For our preclins, the pass mark is the mean minus 2 SDs, scaled to make that mark 50%. This doesn't guarantee that someone fails, however, because the cohort's marks don't follow a normal distribution.
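    Something like this, I'd guess (invented marks, and the actual scaling formula may well differ):

    ```python
    import numpy as np

    marks = np.array([48.0, 55, 62, 66, 70, 73, 79, 84])  # invented raw marks (%)

    raw_cut = marks.mean() - 2 * marks.std()  # cohort pass mark before scaling

    # Rescale linearly so the pass mark maps to 50% and full marks stay at 100%.
    scaled = 50 + (marks - raw_cut) * (100 - 50) / (100 - raw_cut)
    ```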

    In clinics, we use the modified Angoff method, where a group of examiners meet and review each question, deciding what percentage of a group of 100 minimally competent FY1s they believe would get the question correct. They add up all the marks and they then do some weird scaling thing to get a pass mark. Don't really understand it, and it seems awfully complex, but basically, if you have 2 questions, one of which 75% of crap FY1s are expected to get right, and one of which 25% of crap FY1s are expected to get right, the pass mark would be 50%.
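    For what it's worth, the arithmetic in that 2-question example is just the average of the per-question estimates - something like this sketch (with several examiners you'd presumably average their estimates for each question first):

    ```python
    import statistics

    # Agreed estimate, per question, of what % of 100 minimally competent
    # FY1s would get it right.
    angoff_estimates = [75, 25]  # the two-question example above

    pass_mark = statistics.mean(angoff_estimates)
    print(pass_mark)  # 50.0, i.e. a 50% pass mark
    ```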
    • Hygeia
    For our OSCEs we are cohort marked for each individual station. There's a marksheet with specific points that need to be met to get marks, and the assessors also give their overall impression: pass / borderline pass / borderline fail / fail. All the marks from candidates rated borderline pass or borderline fail are then averaged, and that average becomes the pass mark for that station. Then they just add up all the individual station pass marks to get the overall pass mark.
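    The mechanics, roughly, with made-up candidates and scores:

    ```python
    # One OSCE station: (checklist score, assessor's overall impression).
    station = [
        (18, "pass"),
        (12, "borderline pass"),
        (10, "borderline fail"),
        (15, "pass"),
        (11, "borderline pass"),
        (6,  "fail"),
    ]

    # Station pass mark = mean checklist score of the borderline candidates.
    borderline = [score for score, rating in station if rating.startswith("borderline")]
    station_pass_mark = sum(borderline) / len(borderline)  # (12 + 10 + 11) / 3 = 11.0

    # The overall exam pass mark is the sum of the station pass marks.
    ```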
    • Fluffy
    (Original post by JCM89)
    This approach is fairer than strict percentage-point grading, which sets arbitrary boundaries without regard to the overall performance of the cohort.
    Well, that depends. What if one year's cohort are **** hot, while the next are not? You could fail 2009's exam because you're in a high-performing year (more likely when N is smaller), but if you had been in the year below, the same performance could have been comfortably middle of the road...
    • Reply
    (Original post by Hygeia)
    For our OSCEs we are cohort marked for each individual station. There's a marksheet with specific points that need to be met to get marks, and the assessors also give their overall impression: pass / borderline pass / borderline fail / fail. All the marks from candidates rated borderline pass or borderline fail are then averaged, and that average becomes the pass mark for that station. Then they just add up all the individual station pass marks to get the overall pass mark.
    BL uses the Borderline Group Method for OSCEs to give the pass mark for each station too.
    • Reply
    Manchester uses the cohort method too. For each exam, 2 SDs below the mean mark for the cohort equals a fail. The marks aren't scaled, however, so the pass mark for an exam can vary greatly between years - e.g. the 2008 year 2 gastro exam had a pass mark of 48%, while the 2009 year 2 gastro exam required 61% to pass.
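    That's exactly what the rule predicts - the cut moves with the cohort. The means/SDs below are invented, but they reproduce the 48% vs 61% gap:

    ```python
    def pass_mark(mean, sd):
        # Manchester-style rule as described above: mean minus 2 SDs, no scaling.
        return mean - 2 * sd

    print(pass_mark(70, 11))  # 48 - e.g. a weaker or more spread-out year
    print(pass_mark(77, 8))   # 61 - e.g. a stronger, tighter year
    ```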
    • Renal
    (Original post by Fluffy)
    Well, that depends. What if one year's cohort are **** hot, while the next are not?
    Do you think that really happens? In a statistically significant way?
    • Fluffy
    (Original post by Renal)
    Do you think that really happens? In a statistically significant way?
    It does - and probably more than most medical schools will ever admit on the record. Do you not remember the massive failing-year issue of a few years back (apols to the regulars who are part of that cohort - no insult intended)? Or the Warwick issue where 3/4 of their first year failed at first sit?
    • Renal
    (Original post by Fluffy)
    It does - and probably more than most medical schools will ever admit on the record. Do you not remember the massive failing-year issue of a few years back (apols to the regulars who are part of that cohort - no insult intended)? Or the Warwick issue where 3/4 of their first year failed at first sit?
    Why is it not more likely that this is the fault of the paper rather than the cohort themselves? :confused:
    • Fluffy
    No - I know in one case the results of a given cohort were low throughout, and at Warwick, 1/3 of the resitters failed the paper, which was allegedly "dumbed down" to avoid embarrassment to the medical school.
    • trektor
    With cohort marking, is it guaranteed that a certain number of people will fail? For example, in the case of Manchester, people below 2 standard deviations of the mean get a fail.

    So my understanding is that, regardless of how well the people who are 2 standard deviations below the mean have actually done, they still fail?
    • Spencer Wells
    (Original post by trektor)
    With cohort marking, is it guaranteed that a certain number of people will fail? For example, in the case of Manchester, people below 2 standard deviations of the mean get a fail.

    So my understanding is that, regardless of how well the people who are 2 standard deviations below the mean have actually done, they still fail?
    If the distribution of the cohort's marks doesn't follow a bell curve (as is the case), then it's possible that no one falls below the mean minus 2 SDs.
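    E.g. with a fairly even spread of marks (made-up numbers), nobody sits below the mean minus 2 SDs:

    ```python
    import numpy as np

    marks = np.array([55, 60, 65, 70, 75, 80, 85])  # invented, evenly spread cohort

    cut = marks.mean() - 2 * marks.std()  # 70 - 2 * 10 = 50
    print((marks < cut).sum())            # 0 - no one is below the cut
    ```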
    • Thread Starter
    (Original post by Spencer Wells)
    If the distribution of the cohort's marks doesn't follow a bell curve (as is the case), then it's possible that no one falls below the mean minus 2 SDs.
    :cry:
    you reminded me of my stats exam
 
 
 