The Student Room Group

Is AI good at marking?

Hi, I wanted to ask if anyone has any experience with AI markers, specifically for Edexcel Economics.

So far I have seen astarai.co.uk and econai.site (not an ad; I haven't paid for either myself yet).

I know AI isn't perfect, but for Economics, which is already subjective with its banded answers, I feel it is the best cheap option when a teacher is not available.

Do you guys find that they are accurate and worth the price?

Reply 1

Original post
by doorrightsix
Hi, I wanted to ask if anyone has any experience with AI markers, specifically for Edexcel Economics.

So far I have seen astarai.co.uk and econai.site (not an ad; I haven't paid for either myself yet).

I know AI isn't perfect, but for Economics, which is already subjective with its banded answers, I feel it is the best cheap option when a teacher is not available.

Do you guys find that they are accurate and worth the price?


I wouldn't trust AI to review anything. It can be very wishy-washy, and you should be wary of any feedback it gives you being (a) incorrect or (b) overly positive because it is trying to please you.

Is there no one who can help you mark your work? Do you think that, if you look at the mark schemes, you could mark your own work honestly? It wouldn't be ideal, but you'd probably be able to be more critical than any AI.

Reply 2

Original post
by doorrightsix
Hi, I wanted to ask if anyone has any experience with AI markers, specifically for Edexcel Economics.
So far I have seen astarai.co.uk and econai.site (not an ad; I haven't paid for either myself yet).
I know AI isn't perfect, but for Economics, which is already subjective with its banded answers, I feel it is the best cheap option when a teacher is not available.
Do you guys find that they are accurate and worth the price?

Hey,
I’m generally cautious with AI markers for exam subjects, mainly because of academic misconduct concerns. You are often uploading school work or exam-style answers and it is not always clear how those responses are stored, reused or whether they end up as training data. That uncertainty alone makes me hesitant, especially with paid platforms that are not fully transparent.

From an AI perspective (I’m a computer science student, so this is based on how these systems typically work), these tools can be useful but have limits. Economics marking is banded and subjective, which actually makes it easier for AI than something highly technical, but the models are still estimating rather than truly “marking” like an examiner. Training data is usually a mix of public mark schemes, examiner reports and synthetic examples, so accuracy can vary.

They can be genuinely helpful when no teacher support is available, especially for identifying missing evaluation, weak application or poor structure. As a feedback tool rather than a definitive marker, they add value. They are also much cheaper than a tutor and faster.
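If it helps, here is a rough sketch of how you might structure a "feedback, not marks" prompt for any general chat model. The helper name and the descriptor text are my own placeholders, not from any of the tools mentioned; you would paste in the real band descriptors from the Edexcel mark scheme.

```python
# Sketch of a feedback-first prompt for a generic chat model.
# build_feedback_prompt is a hypothetical helper, not a real API;
# adapt the text to whatever chat window or API you actually use.

LEVEL_DESCRIPTORS = """Level 3 (7-9 marks): detailed application,
developed chains of reasoning, supported evaluation..."""  # paste the real descriptors here

def build_feedback_prompt(question: str, answer: str) -> str:
    # Asking for diagnostics instead of a definitive mark limits the
    # over-positive "trying to please you" failure mode.
    return (
        "You are reviewing an Edexcel A-level Economics answer.\n"
        f"Question: {question}\n"
        f"Band descriptors:\n{LEVEL_DESCRIPTORS}\n"
        f"Student answer:\n{answer}\n\n"
        "Do NOT award a mark. Instead list: (1) missing evaluation points, "
        "(2) weak or absent application to the extract, (3) structural "
        "problems, each with a one-line suggested fix."
    )
```

The point is only to steer the model towards the three weaknesses mentioned above (evaluation, application, structure) rather than a number you might be tempted to trust.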

I would not treat the mark as reliable, especially for 12- and 25-mark questions. Paid versions are not necessarily “better” in a meaningful way; often you are paying for convenience and formatting rather than substantially improved judgement. It is fine as a supplement, but not something I would rely on heavily, and I would think carefully before paying or uploading large amounts of assessed work. I would always recommend finding some time to go over your work with your teachers or lecturers, who can mark it properly and give feedback; they are often the ones who sign up as exam markers over the summer and have the experience.

Hope that helps,
Aura (Uni of Staffs)

Reply 3

Original post
by StaffsRep Aura
Hey,
I’m generally cautious with AI markers for exam subjects, mainly because of academic misconduct concerns. You are often uploading school work or exam-style answers and it is not always clear how those responses are stored, reused or whether they end up as training data. That uncertainty alone makes me hesitant, especially with paid platforms that are not fully transparent.
From an AI perspective (I’m a computer science student, so this is based on how these systems typically work), these tools can be useful but have limits. Economics marking is banded and subjective, which actually makes it easier for AI than something highly technical, but the models are still estimating rather than truly “marking” like an examiner. Training data is usually a mix of public mark schemes, examiner reports and synthetic examples, so accuracy can vary.
They can be genuinely helpful when no teacher support is available, especially for identifying missing evaluation, weak application or poor structure. As a feedback tool rather than a definitive marker, they add value. They are also much cheaper than a tutor and faster.
I would not treat the mark as reliable, especially for 12- and 25-mark questions. Paid versions are not necessarily “better” in a meaningful way; often you are paying for convenience and formatting rather than substantially improved judgement. It is fine as a supplement, but not something I would rely on heavily, and I would think carefully before paying or uploading large amounts of assessed work. I would always recommend finding some time to go over your work with your teachers or lecturers, who can mark it properly and give feedback; they are often the ones who sign up as exam markers over the summer and have the experience.
Hope that helps,
Aura (Uni of Staffs)

Thanks for the reply

So would you say ChatGPT or Gemini is at the same level as these paid services, and if so, how should I go about engineering a prompt to do so?

Reply 4

Original post
by Scotland Yard
I wouldn't trust AI to review anything. It can be very wishy-washy, and you should be wary of any feedback it gives you being (a) incorrect or (b) overly positive because it is trying to please you.
Is there no one who can help you mark your work? Do you think that, if you look at the mark schemes, you could mark your own work honestly? It wouldn't be ideal, but you'd probably be able to be more critical than any AI.

I am not an economics teacher, and with banded answers I don't really know where to start on feedback; it's not as if I would write something I thought wouldn't get marks.

I don't really care how my data is stored; I have nothing to hide anyway.

Reply 5

Original post
by doorrightsix
I am not an economics teacher, and with banded answers I don't really know where to start on feedback; it's not as if I would write something I thought wouldn't get marks.
I don't really care how my data is stored; I have nothing to hide anyway.

If you do use the mentioned tools, I would still recommend checking your settings: make sure you have "developer mode" enabled or, regardless, training on your data disabled. Chat memory settings usually aren't enough. Specialised tools are often trained on datasets catered to their main goal, so they can be quite bad at reading anything between the lines that falls outside that exact domain, e.g. something not mentioned in a past mark scheme; on the other hand, they are fine-tuned for precision on that subject. Generalised AI models have their advantages, as they can do a lot more and are often trained on more data, but this is where quality sometimes degrades: they can draw on untrustworthy sites such as Reddit, and ChatGPT and Gemini are famous for this.

The ideal balance would be to use a specialised tool and a generalised tool, then compare the differences in their answers. They can differ quite a bit and sometimes fill in gaps you may have. ChatGPT is quite good at general marking guidance, while specialised tools will try to give more exact marks by digging deeper into specifics; you may notice they over-mark answers full of keywords/buzzwords or under-mark when exact phrasings are not stated. Use the average and the advice of both.
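To make the averaging idea concrete, here is a tiny sketch. The function and the 20% threshold are my own invention, not part of any of these tools; "specialist" and "general" marks just mean whatever number each tool gave you out of the same maximum.

```python
# Minimal sketch of the "compare two markers" idea: combine a mark from a
# specialised tool and a general model, then flag big disagreements.
# combine_marks and the 20% threshold are illustrative assumptions.

def combine_marks(specialist_mark: int, general_mark: int, max_mark: int) -> dict:
    avg = (specialist_mark + general_mark) / 2
    gap = abs(specialist_mark - general_mark)
    return {
        "average": avg,
        "disagreement": gap,
        # A large gap (here, more than 20% of the max) means re-read both
        # sets of feedback rather than trusting either number.
        "needs_human_check": gap > 0.2 * max_mark,
    }

print(combine_marks(18, 12, 25))
# → {'average': 15.0, 'disagreement': 6, 'needs_human_check': True}
```

The number itself matters less than the disagreement: if the two tools are six marks apart on a 25-marker, that is your cue to read the feedback, not the mark.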

You can never trust the opinion of just one person, and the same applies to AI.
