Have your say: Students can use mock results to appeal calculated grades

tam13
Badges: 11
Rep:
?
#1
Report Thread starter 1 month ago
#1
Here's where you can post a comment about our Students can use mock results to appeal calculated grades article.

Read the full Students can use mock results to appeal calculated grades article and join in the discussion by posting a message below.
0
reply
Bill V2
Badges: 14
Rep:
?
#2
Report 1 month ago
#2
I think that only accepting a mock result is odd - the way mocks are done varies wildly between schools. Some students know what topics are on the paper beforehand, while others sit deliberately difficult papers that are marked harshly, and some people will have found mark schemes online, etc. That said, I suppose it can be a good thing for the students who would be most affected by the standardisation, if they have good mock results to prove they were working at a higher grade.
0
reply
fallen_acorns
Badges: 20
Rep:
?
#3
Report 1 month ago
#3
There is a hell of a lot of very dishonest reporting going on.

The narrative from the BBC and a lot of the papers is "Students marked down" or "Students results downgraded"

To any normal person who doesn't follow what's going on, it seems from reading the headlines and bulletins that students are receiving lower grades than they deserve...

This is absolutely false.

It should read: "Students' marks downgraded from teachers' predictions because teachers vastly overestimated the grades they would achieve" or "students' marks corrected to reflect reality, rather than their teachers' desires".

Students haven't received less than they deserve (overall - obviously some individual students will have been cheated out of grades, but an equal number will have received grades higher than they would have achieved had they sat the exam). The grades have still risen from last year, and the spread of grades is about what we would expect. The students on the whole haven't lost out...

But that's not the narrative the media and opposition want to push. They want to pretend that teachers' grades were actually a decent representation, and that students have been cheated out.

Remember this though - for every student who would have got a better grade had they sat the exam, there is another student who would have ****ed up the exam and has been saved by the current situation. You won't hear about the second group though, because no student's parents will phone up the media in outrage saying "My son probably would have gotten a C!! But he's actually been given a B, this is an outrage!"
1
reply
fallen_acorns
Badges: 20
Rep:
?
#4
Report 1 month ago
#4
A* grades went up from 7.8% to 9%.
Including A grades, it went up from 25.5% to 27.9%.

Passes in general went up by 2.6%.

It's just deeply dishonest to say that students (collectively) have been downgraded or cheated out of their future...

They haven't. Their grades (as a group) have just been adjusted back to normal, after the teachers' predictions turned out to be shockingly high.
0
reply
Canary84
Badges: 15
Rep:
?
#5
Report 1 month ago
#5
(Original post by fallen_acorns)
A* grades went up from 7.8% to 9%.
Including A grades, it went up from 25.5% to 27.9%.

Passes in general went up by 2.6%.

It's just deeply dishonest to say that students (collectively) have been downgraded or cheated out of their future...

They haven't. Their grades (as a group) have just been adjusted back to normal, after the teachers' predictions turned out to be shockingly high.
Yes, you can also see a similar trend in previous years from predicted vs actual grades
1
reply
fallen_acorns
Badges: 20
Rep:
?
#6
Report 1 month ago
#6
(Original post by Canary84)
Yes, you can also see a similar trend in previous years from predicted vs actual grades
I know. I really feel sorry for students this year, because many will have lost out unfairly... but the dishonesty in the media reporting of this is just infuriating.
0
reply
JSG29
Badges: 11
Rep:
?
#7
Report 1 month ago
#7
@fallen_acorns - you are accusing the media of pushing a false narrative whilst peddling your own. The suggestion that the system is fair because, as a group, results are a small improvement on previous years is ********. Was keeping CAGs a perfect option? No, but arbitrarily downgrading students without bothering to assess whether they've been predicted too high is ridiculous, and downgrading students by 2 or 3 grades (and indeed upgrading other students) is frankly unjustifiable. How can you claim that grades are more accurate when modulated by a school's prior attainment than when left to the assessment of people who've known the students for at least 2 years? Imagine having been working at an A* standard for 2 years, but because your school hasn't had anyone above a B in your subject in the last 3 years, you get a B. Additionally, private schools have done far better relatively than any other type of school, which is unacceptable (but unsurprising under the current government).

Personally, I think they should have set out clear requirements and requested each school (or even each teacher) to provide evidence of why they have given each grade to each student, and checked a couple of them per group (as they do in coursework moderation). Having not done that, I believe there is really no option other than to accept CAGs.

A final note - while the Welsh government have said there will be no fees for appeals, the English government has said it is down to the exam boards. Some exam boards have fees of £100 per appeal - schools in poorer areas simply will not be able to afford that.
0
reply
fallen_acorns
Badges: 20
Rep:
?
#8
Report 1 month ago
#8
(Original post by JSG29)
@fallen_acorns - you are accusing the media of pushing a false narrative whilst peddling your own. The suggestion that the system is fair because, as a group, results are a small improvement on previous years is ********. Was keeping CAGs a perfect option? No, but arbitrarily downgrading students without bothering to assess whether they've been predicted too high is ridiculous, and downgrading students by 2 or 3 grades (and indeed upgrading other students) is frankly unjustifiable. How can you claim that grades are more accurate when modulated by a school's prior attainment than when left to the assessment of people who've known the students for at least 2 years? Imagine having been working at an A* standard for 2 years, but because your school hasn't had anyone above a B in your subject in the last 3 years, you get a B. Additionally, private schools have done far better relatively than any other type of school, which is unacceptable (but unsurprising under the current government).

Personally, I think they should have set out clear requirements and requested each school (or even each teacher) to provide evidence of why they have given each grade to each student, and checked a couple of them per group (as they do in coursework moderation). Having not done that, I believe there is really no option other than to accept CAGs.

A final note - while the Welsh government have said there will be no fees for appeals, the English government has said it is down to the exam boards. Some exam boards have fees of £100 per appeal - schools in poorer areas simply will not be able to afford that.
As a group, the algorithmic grades were 2-3% off the last set of results (last year's). The teachers' grades were 14-20% off. From that we can assume that overall, the assigned grades are more representative for more students than teachers' grades would have been.

Does that mean that everyone will get what they deserved? No. Some students will be failed by the system, and for them there should be a really strong appeal system.

But one thing that you're not accounting for is that there are many, many students who would always have been disappointed. Think about how many students fail to get the grades they need in a normal year. There are loads - I can think of 5-6 just from my close friends back when I did my A-levels, who all got less than they wanted. There are always crying students, some who are shocked by how badly they did, etc. It's just normal for results days.

The only difference this year is that normally a student failing isn't news because it's their own fault... but this year none of the students who 'failed' know whether they would have or not. Equally, many of the students who 'passed' may have failed had they sat the exam.

This will always be the case though, no matter what metric you use. You either give the vast majority of students the grade they 'want' (not deserve), which would be teachers' grades - unrealistic, and it would lead to a crazy level of grade inflation. Or you take one of the metrics, or a combination of them, and try to estimate grades, in which case no matter how you do it, some students will get less than they deserve and some will get more. We will never know who was who, and students will never know if they were unlucky or lucky. It's not fair at all, but without making students sit the exams (which probably should have happened) there is no fair way to do this.
Last edited by fallen_acorns; 1 month ago
0
reply
JSG29
Badges: 11
Rep:
?
#9
Report 1 month ago
#9
(Original post by fallen_acorns)
As a group, the algorithmic grades were 2-3% off the last set of results (last year's). The teachers' grades were 14-20% off. From that we can assume that overall, the assigned grades are more representative for more students than teachers' grades would have been.
Sorry, but that does not follow. If we took the set of grades from last year and assigned them alphabetically to students, that would be exactly the same overall as last year, but would clearly not be more representative for more students.
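This counterexample can be made concrete with a quick script (hypothetical numbers, nothing to do with real Ofqual data): shuffle a cohort's grades among its students and every aggregate statistic stays identical, yet the per-student accuracy collapses to roughly chance level.

```python
import random
from collections import Counter

random.seed(0)

grades = ["A*", "A", "B", "C", "D", "E", "U"]
# Hypothetical cohort: each student's "earned" grade (made-up weights)
earned = random.choices(grades, weights=[8, 18, 26, 24, 14, 7, 3], k=10_000)

# "Alphabetical" reassignment: the same multiset of grades, shuffled
reassigned = earned[:]
random.shuffle(reassigned)

# Every aggregate statistic is identical...
assert Counter(earned) == Counter(reassigned)

# ...but hardly anyone gets the grade they earned: the match rate is
# roughly the sum of squared grade frequencies (~19% here), not 100%
match_rate = sum(e == r for e, r in zip(earned, reassigned)) / len(earned)
print(f"per-student match rate: {match_rate:.1%}")
```

So "same overall distribution as last year" on its own says nothing about whether individual students received the grades they would have achieved.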
(Original post by fallen_acorns)
Does that mean that everyone will get what they deserved? No. Some students will be failed by the system, and for them there should be a really strong appeal system.
And there isn't. The only appeal system that looks like it will be available is mocks. So anyone who missed mocks can't appeal, anyone whose school didn't do mocks can't appeal, anyone who's home-schooled can't appeal. Additionally, mocks are clearly nowhere near standardised: some schools let students know what's going to be on the exam, and I knew people at school who looked up mark schemes for mocks - how can you use that as a basis for a grade?
Sure, you have the October exams available, but that means you have to essentially lose a year.
(Original post by fallen_acorns)
But one thing that you're not accounting for is that there are many, many students who would always have been disappointed. Think about how many students fail to get the grades they need in a normal year. There are loads - I can think of 5-6 just from my close friends back when I did my A-levels, who all got less than they wanted. There are always crying students, some who are shocked by how badly they did, etc. It's just normal for results days.

The only difference this year is that normally a student failing isn't news because it's their own fault... but this year none of the students who 'failed' know whether they would have or not. Equally, many of the students who 'passed' may have failed had they sat the exam.

This will always be the case though, no matter what metric you use. You either give the vast majority of students the grade they 'want' (not deserve), which would be teachers' grades - unrealistic, and it would lead to a crazy level of grade inflation. Or you take one of the metrics, or a combination of them, and try to estimate grades, in which case no matter how you do it, some students will get less than they deserve and some will get more. We will never know who was who, and students will never know if they were unlucky or lucky. It's not fair at all, but without making students sit the exams (which probably should have happened) there is no fair way to do this.
But at least usually people have passed or failed on their own merit. There is a whole other debate to be had about whether exams are the best way to assess a 2 year course, but that isn't the current discussion. Simply determining grades based on previous assessments of people from the same area is surely not an acceptable method of examination. Sure, CAGs aren't a great option. But they are massively superior to the government's system.
0
reply
decaprisun
Badges: 7
Rep:
?
#10
Report 1 month ago
#10
(Original post by fallen_acorns)
There is a hell of a lot of very dishonest reporting going on.

The narrative from the BBC and a lot of the papers is "Students marked down" or "Students results downgraded"

To any normal person who doesn't follow whats going on.. it seems from reading the headlines and bulletins that students are receiving lower grades then they deserve...

This is absolutely false.

It should read: "Students marks downgraded from teachers predictions because teachers vastly overestimated the grades they would achieve" or "students marks corrected to reflect reality, rather than their teachers desires"

Students haven't recieved less than they deserve (overal, obviously some individual students will have been cheated out of grades.. but an equal number will have recieved grades higher than that they would have achieved had they sat the exam). The grades have still risen from last year, and the spread of grades is about what we would expect. The students on the whole haven't lost out...

But that's not the narrative the media and opposition want to push.. they want to pretend that teachers grades were actually a decent representative, and that students have been cheated out.

Remember this though - for every student who would have got a better grade had they sat the exam. There is another student who would have ****ed up the exam, and has been saved by the current situation.. You won't hear about the second group though, because no student's parents will phone up the media in outrage saying "My son probably would have gotten a C!! But he's actually been given an B, this is an outrage!"
As a student downgraded by 2 grades, this response really pisses me off. I know some of my friends got straight A*s when they would never have gotten that in real exams, then there's people like me who ended up standardised onto a completely different level. Then mock grades are in your face to say "yup, we screwed you over", because you can't even appeal to get a grade higher than that.

What do you mean, "reflect reality"? No one even sat their tests ffs. Individuals shouldn't be downgraded just to meet a data point while the rest receive their predicteds. We were all put in the "BEST SCENARIO", so the only thing ****ed up is how you've just neglected 280k students stressing over their results due to a deadass system.
1
reply
fallen_acorns
Badges: 20
Rep:
?
#11
Report 4 weeks ago
#11
(Original post by decaprisun)
As a student downgraded by 2 grades, this response really pisses me off. I know some of my friends got straight A*s when they would never have gotten that in real exams, then there's people like me who ended up standardised onto a completely different level. Then mock grades are in your face to say "yup, we screwed you over", because you can't even appeal to get a grade higher than that.

What do you mean, "reflect reality"? No one even sat their tests ffs. Individuals shouldn't be downgraded just to meet a data point while the rest receive their predicteds. We were all put in the "BEST SCENARIO", so the only thing ****ed up is how you've just neglected 280k students stressing over their results due to a deadass system.
It's hard to look objectively at the situation when you're in the middle of it.

You, though, are not representative of the majority. You are one of only 3.5% of students to receive a grade that was lowered by 2. The other 96.5% did not. Is that unfair? Yes, 100% - I hope you can appeal and go up to your mock grade.
The point though is that no matter what metric you use, it will be unfair. There is no fair way of doing this without sitting exams. You can pick any of mock results, coursework results, school performance, past national exam results, predicted grades, etc., combine them in any way you like, and some students will get less than they deserve and some will get more.

Knowing that, you only have two options: A, you try to create the fairest combination and then give appeals to the students it fails; or B, you give every student their best-case scenario.

I favour A, because the majority of students will receive grades closer to reality, and as long as there is a way to help the minority who don't, it should work out for most. B devalues grades for a year, and creates a situation where no one can be confident in using the grades as any form of useful metric, as they are objectively out by a significant margin.

It's good that you mention students who got grades they didn't deserve though - most people forget them. Plenty of students benefited hugely from this, just as many have been failed by it.
0
reply
fallen_acorns
Badges: 20
Rep:
?
#12
Report 4 weeks ago
#12
(Original post by JSG29)
Sorry, but that does not follow. If we took the set of grades from last year and assigned them alphabetically to students, that would be exactly the same overall as last year, but would clearly not be more representative for more students.

And there isn't. The only appeal system that looks like it will be available is mocks. So anyone who missed mocks can't appeal, anyone who's school didn't do mocks can't appeal, anyone who's home schooled can't appeal. Additionally mocks are clearly nowhere near standard, some schools let students know what's going to be on the exam, I've known people when I was at school to look up mark schemes for mocks - how can you use that as a basis for a grade?
Sure, you have the October exams available, but that means you have to essentially lose a year.

But at least usually people have passed or failed on their own merit. There is a whole other debate to be had about whether exams are the best way to assess a 2 year course, but that isn't the current discussion. Simply determining grades based on previous assessments of people from the same area is surely not an acceptable method of examination. Sure, CAGs aren't a great option. But they are massively superior to the government's system.
60% of grades were given exactly in line with teachers' grades, and 96% were given in line with, or within one grade below, the teachers' grades. Your first comparison doesn't work because we aren't comparing two separate scenarios; we are comparing two versions of one scenario. The first is raw teacher grades, and the second is statistically adjusted teacher grades. Because they draw from the same original data, it's easy to say that the one that is 20% off is less reliable overall than the one that is 2-3% off.

Would it be possible to create an incredibly unreliable data set that's 0% off? Yes. But it's irrelevant in a comparison between two models coming from a single set of data. The reason you can say that the set that is 2-3% off is more reliable in this case isn't because being closer to last year's results = accurate no matter what (which is what you are trying to disprove with your analogy); it's because when you have two models that draw from one set, you use multiple metrics to measure their accuracy. Generally you will be looking at:

overall grade levels; the spread of the grade distribution; and the number of individual students who hit their targets.

In our case, we know that teachers' grades are good on the third (albeit unrealistically good), but produce unacceptable results on the first two. The adjusted results aim to solve this by adjusting a minority of grades downwards (60% remain the same), to create a set of results that best hits all 3 criteria. The new results match the third criterion far less, but reflect the other two far more. Individually, students are annoyed because it's the third criterion that they care about... but nationally the first two are really important, so the government/exam boards want to take them into account.

Now back to your example - you can create hypothetical models that hit one of the 3 points and fail the other two. You can match the overall levels to last year but give each student inverse grades; you can have a perfect grade distribution but give all the good grades to schools in one area and all the bad to another; and you can give all students their optimum target grades no matter how unrealistic, or how many would have missed them in reality. All three would be wrong, but none of them negate the use of the three measures as tests of a model's reliability.

One of those examples, of how you can purposely make one criterion perfect at the expense of the others, is exactly what you want though...
0
reply
fallen_acorns
Badges: 20
Rep:
?
#13
Report 4 weeks ago
#13
The short and better-written version of what I posted above is:

In a system that is going to be judged on multiple metrics, the ability to create a hypothetical result that is perfect on one metric but random on the rest does not mean that, when used in combination with the rest, that metric isn't a reliable way to test the accuracy of the results.
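The three-metric comparison described above can be sketched in a few lines of Python. All numbers here are made up for illustration (they roughly echo the percentages quoted in the thread, not real Ofqual figures), and "CAG hits" is trivially 100% for the raw teacher grades, since they are their own targets:

```python
from collections import Counter

# Hypothetical cohort of 100 students (illustrative numbers only)
last_year = ["A"] * 25 + ["B"] * 40 + ["C"] * 35   # baseline distribution
cags      = ["A"] * 45 + ["B"] * 40 + ["C"] * 15   # inflated teacher predictions
# "Adjusted" model: moderate a minority of CAGs down one grade
adjusted  = ["A"] * 27 + ["B"] * 40 + ["C"] * 33

def score(model):
    """Score a grade set on the three criteria discussed in the thread."""
    dist_m, dist_b = Counter(model), Counter(last_year)
    # 1. overall grade levels: drift in the top-grade share vs the baseline
    drift = abs(dist_m["A"] - dist_b["A"]) / len(last_year)
    # 2. spread: total variation distance between the two distributions
    spread = sum(abs(dist_m[g] - dist_b[g]) for g in "ABC") / (2 * len(last_year))
    # 3. individual hits: fraction of students who get exactly their CAG
    hits = sum(m == c for m, c in zip(model, cags)) / len(cags)
    return drift, spread, hits

# The raw CAGs win on criterion 3 but fail 1 and 2; the adjusted set
# sacrifices some of criterion 3 to satisfy the other two.
for name, model in [("teacher CAGs", cags), ("adjusted", adjusted)]:
    drift, spread, hits = score(model)
    print(f"{name:12s} drift={drift:.0%} spread={spread:.0%} CAG hits={hits:.0%}")
```

The point of the sketch is only that no single metric decides the comparison: a model scoring perfectly on one criterion can still be plainly worse once all three are read together.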
0
reply
Dechante
Badges: 14
Rep:
?
#14
Report 4 weeks ago
#14
Hmmmm, I feel like it's a hard one. The system is defo unfair. In my mocks I got an A in biology and a B in chemistry, and I was working towards As and a few A*s in chem. My CAGs were A and B for biology and chemistry respectively. I ended up getting a B in biology and a C in chemistry, so personally I will appeal my results because I haven't been getting a C all year.

However, I can understand how people may think it's a ridiculous system, as some people cheat in mocks, some didn't take them seriously, and some schools, like mine, deliberately mark you down and are strict to try to motivate you to work harder.
0
reply
JSG29
Badges: 11
Rep:
?
#15
Report 4 weeks ago
#15
(Original post by fallen_acorns)
60% of grades were given exactly in line with teachers' grades, and 96% were given in line with, or within one grade below, the teachers' grades. Your first comparison doesn't work because we aren't comparing two separate scenarios; we are comparing two versions of one scenario. The first is raw teacher grades, and the second is statistically adjusted teacher grades. Because they draw from the same original data, it's easy to say that the one that is 20% off is less reliable overall than the one that is 2-3% off.

Would it be possible to create an incredibly unreliable data set that's 0% off? Yes. But it's irrelevant in a comparison between two models coming from a single set of data. The reason you can say that the set that is 2-3% off is more reliable in this case isn't because being closer to last year's results = accurate no matter what (which is what you are trying to disprove with your analogy); it's because when you have two models that draw from one set, you use multiple metrics to measure their accuracy. Generally you will be looking at:

overall grade levels; the spread of the grade distribution; and the number of individual students who hit their targets.

In our case, we know that teachers' grades are good on the third (albeit unrealistically good), but produce unacceptable results on the first two. The adjusted results aim to solve this by adjusting a minority of grades downwards (60% remain the same), to create a set of results that best hits all 3 criteria. The new results match the third criterion far less, but reflect the other two far more. Individually, students are annoyed because it's the third criterion that they care about... but nationally the first two are really important, so the government/exam boards want to take them into account.

Now back to your example - you can create hypothetical models that hit one of the 3 points and fail the other two. You can match the overall levels to last year but give each student inverse grades; you can have a perfect grade distribution but give all the good grades to schools in one area and all the bad to another; and you can give all students their optimum target grades no matter how unrealistic, or how many would have missed them in reality. All three would be wrong, but none of them negate the use of the three measures as tests of a model's reliability.

One of those examples, of how you can purposely make one criterion perfect at the expense of the others, is exactly what you want though...
OK, I accept the model I gave yesterday was overly silly. But my point was that the government's system is fundamentally unfair. Consider the following model, based on the same data set of CAGs:
Sort students by household income. The top 60% receive the grade given. The bottom 40% receive 1 grade lower.
Clearly, that would be unacceptable. But it arguably meets your 3 criteria better than either model. So should we have used it instead?

I would argue the best measure of success of a model is how likely a student is to receive the grades they have earned. And downgrading (or upgrading) students grades based on prior performance of their school is in my view not likely to improve that. I would have (as I said earlier) supported assessing the grades given by teachers by requiring them to submit evidence of why they have selected that grade for a few randomly selected students, as is done for coursework (assessed internally, moderated externally). But I cannot support the government's methodology.

W.r.t. students being upgraded from CAGs, I cannot see how that could possibly be justified. Given that this whole debate was sparked by the government deciding that teachers' grades are too generous, how can you arbitrarily decide that some students were given too low a grade? Around 74 students were given a grade 3 grades higher than their CAG - that is simply ridiculous, and makes a mockery of the whole system.

As a final note, I mentioned previously that the government had said that appeal costs were decided by exam boards - I am glad to have seen that the government has announced it will cover the costs of appeals. However, AFAIK there is yet to be any announcement on what proof you need to provide of mock grades, and I don't believe the idea works well anyway for reasons previously given.
1
reply
fallen_acorns
Badges: 20
Rep:
?
#16
Report 4 weeks ago
#16
BBC has a very fair piece out on the whole thing today:

https://www.bbc.co.uk/news/education-53787203

It shows the problem well: we can all see that the system is flawed and unfair, but it also points out that teachers' grades could be up to 38% above any other year's, which is just far too high to be used universally. So what to do?

For me: keep things as they are, but make the appeal system easier. Let students use graded work and coursework as proof, as well as mocks, and really try to help/catch the students who have been failed by this.
1
reply
decaprisun
Badges: 7
Rep:
?
#17
Report 4 weeks ago
#17
(Original post by fallen_acorns)
It's hard to look objectively at the situation when you're in the middle of it.

You, though, are not representative of the majority. You are one of only 3.5% of students to receive a grade that was lowered by 2. The other 96.5% did not. Is that unfair? Yes, 100% - I hope you can appeal and go up to your mock grade.
The point though is that no matter what metric you use, it will be unfair. There is no fair way of doing this without sitting exams. You can pick any of mock results, coursework results, school performance, past national exam results, predicted grades, etc., combine them in any way you like, and some students will get less than they deserve and some will get more.

Knowing that, you only have two options: A, you try to create the fairest combination and then give appeals to the students it fails; or B, you give every student their best-case scenario.

I favour A, because the majority of students will receive grades closer to reality, and as long as there is a way to help the minority who don't, it should work out for most. B devalues grades for a year, and creates a situation where no one can be confident in using the grades as any form of useful metric, as they are objectively out by a significant margin.

It's good that you mention students who got grades they didn't deserve though - most people forget them. Plenty of students benefited hugely from this, just as many have been failed by it.
I favour C: I get the grades I was predicted, and we all gucci - couldn't care less about others.
0
reply