Thousands of A-level students will be given the 'wrong' grades on results day: will you be one of them?


In some subjects, more than 40% of students could receive 'incorrect' exam grades

You attended all the lessons, revised hard, practised past papers and read the spec from beginning to end – every base was covered. You did your best in the exams, trying to put all you learned into action. Everything went well.

Now it’s down to the exam board to make sure you get the grade you deserve.

But the fact is that your chances of getting the 'correct' grade may be as low as 55% in some subjects.

How can this be possible?

How reliable is exam marking?

This report by Ofqual (the body that regulates the exam system) gives a pretty clear answer to this question. Ofqual's analysis of the marking of all the main subjects at GCSE and A-level reveals a worrying picture.

[Infographic: chances of students receiving the correct grades, by subject]

The consistency of marking varies from subject to subject. In maths and science, reliability is high – around 96% in maths (although this still means that roughly one in 25 maths students may receive an 'incorrect' grade).

But in the more essay-based humanities subjects and in English, reliability drops to 60% or below. That means two out of five students may be awarded an 'incorrect' grade – either higher or lower than the grade they would have received if senior examiners had marked their papers. Ofqual calls this the 'definitive' grade.

Why is some marking inconsistent?

Subjects vary in the type of knowledge they are based on. In maths and science there are rules and laws that need to be applied accurately to reach an answer. In English and history there are some facts, but more marks are allocated for analysis and interpretation. These are qualities that have to be judged by a marker, so it's much harder to arrive at a 'correct' mark.

Another factor is the type of question asked. The more closed a question, the easier it is to mark. A multiple-choice question, for example, has one correct answer; all the rest are wrong. Marking an essay, however, involves far more marker judgement: the mark scheme may set out general 'levels', and the marker needs to place the answer in one of these levels and then judge exactly where it fits within that level.

There’s also the question of how tightly packed the grade boundaries are in a particular subject. The students most likely to receive a 'wrong' grade are those whose marks fall close to a grade boundary. The narrower the grade boundaries, the greater the potential for unreliable grades.

Does this mean my grades are wrong and I should get a 'review of marking'?

Exam boards have a really tough job. In 2018 they were responsible for some 50,000 examiners marking about 15 million scripts. This blog by Ofqual explains the huge effort they make to get grading right.

And in most cases they do – the majority of grades are spot-on. It’s also worth remembering that the unreliability of marking has as much chance of working in your favour as it does of disadvantaging you!

If you do choose to go for a review of marking in an arts or humanities subject, the very fact that judgement is involved means a reviewer may accept a range of marks as tolerable – not just the exact mark they would have given. In 2018 around 20% of reviews of marking resulted in a grade change. The figure for A-level English Literature was 18%, and for A-level Maths 19%.

If you are considering a review, try to get your paper back as soon as possible and go through it with a teacher. They will be able to advise on whether a review may be worthwhile.

And remember that the closer you are to the grade boundary above, the more likely your review will be successful.

What could be done to make marking more accurate?

Making all exams multiple choice would largely solve the reliability problem. But would sacrificing all those ‘soft’ skills tested in arts and humanities subjects be worth it? Probably not.

Exam boards could invest more resources in the quality of marking. They do try to improve every year, but could they do more? Are there sufficient checks on less experienced examiners? Could mark schemes do a better job of differentiating between answers? Are there enough face-to-face meetings where examiners can argue through tricky answers with senior examiners?

Or is our exam system just too complicated, too unwieldy and expected to do an impossible job?
