The Student Room Group

Cambridge University to introduce written admissions tests


Original post by jneill
The point about this news is not the interview. The interview is well established.

The point is about the use of Oxford-style pre-interview tests for many courses...

This gives rise to the possible implication that Cambridge will interview fewer applicants as a result (like Oxford).

We will have to wait for Cambridge to clarify that in due course.



Probably not... for the time being, anyway.

The new tests will form part of Cambridge's assessments of candidates, rather than being a method of selecting students for interview, Dr Lucy said.

http://www.telegraph.co.uk/education/educationnews/12136030/Cambridge-University-brings-back-entrance-exams-amid-struggle-to-identify-brightest-students.html

It may become some kind of endurance test for Cambridge applicants. How long will they be able to cope with the increased amount and more complicated nature of the work in the limited time available? :tongue:
Original post by jneill
I think so: http://www.undergraduate.study.cam.ac.uk/applying/admissions-assessments

"* No advance preparation will be needed, other than revision of relevant recent subject knowledge where appropriate."



Fair enough. I really dislike Oxbridge's line that no advance preparation is needed. It might not be needed in a technical sense, but it's almost certainly required to be competitive. A very naive shamika took that at face value at 17 (luckily it didn't cost me an offer).
(edited 8 years ago)
Yes!
Original post by shamika
Contrast that to Gavin Lowe (Oxford CompSci admissions tutor) who repeatedly said that he considers interviews to be more predictive than MAT or academic history. (This was in the TSR's most recent MAT prep thread.)

Don't think he had done a proper study though.


It's actually rather hard to do a proper study, because of the missing data problem: we don't know how the candidates we rejected would have got on if they had been accepted. The only way to do a proper study would be to accept candidates who had done badly on the MAT or interviews (and who would normally be rejected) and see how they would cope. Of course, we're not going to do that.

In most cases, performance on the MAT, performance in interviews and performance during the degree are fairly compatible. However, my experience is that where a student's performance during their degree has been out of line with MAT or interview scores, it's the latter that is more reliable. (An exception to the above is that good teaching can help candidates do better on the MAT, and to a lesser extent in interviews, than those with equal potential but who received less good teaching; we can adjust for that to a certain extent.)

Gavin
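As an aside for readers: Gavin's missing-data point is closely related to what statisticians call range restriction. A toy simulation (entirely made-up numbers, not real admissions data) shows how admitting only high scorers shrinks the correlation observable among admitted students, even when the test genuinely predicts outcomes in the full pool:

```python
import numpy as np

# Illustrative simulation (synthetic data): range restriction.
rng = np.random.default_rng(1)
n = 100_000

test = rng.normal(0.0, 1.0, n)
# Assume a true correlation of about 0.6 in the full applicant pool.
degree = 0.6 * test + 0.8 * rng.normal(0.0, 1.0, n)

full_r = np.corrcoef(test, degree)[0, 1]

# Admit only the top 20% of test scorers.
admitted = test > np.quantile(test, 0.8)
restricted_r = np.corrcoef(test[admitted], degree[admitted])[0, 1]

print(f"full-pool r = {full_r:.2f}, admitted-only r = {restricted_r:.2f}")
```

The correlation among admitted students comes out much lower than in the full pool, which is one reason studies on admitted cohorts alone can understate how predictive a test is.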
Original post by gavinlowe
It's actually rather hard to do a proper study, because of the missing data problem: we don't know how the candidates we rejected would have got on if they had been accepted. The only way to do a proper study would be to accept candidates who had done badly on the MAT or interviews (and who would normally be rejected) and see how they would cope. Of course, we're not going to do that.

In most cases, performance on the MAT, performance in interviews and performance during the degree are fairly compatible. However, my experience is that where a student's performance during their degree has been out of line with MAT or interview scores, it's the latter that is more reliable. (An exception to the above is that good teaching can help candidates do better on the MAT, and to a lesser extent in interviews, than those with equal potential but who received less good teaching; we can adjust for that to a certain extent.)

Gavin


Except it is easy* to answer the question "how correlated are interview scores with degree performance (for accepted students)?", which is really the only question people are interested in.

*More accurately, easy if the data is readily available
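For readers who do have such data, the calculation shamika describes is indeed straightforward; here is a minimal sketch using only the standard library, with a Fisher-transform confidence interval and entirely made-up numbers:

```python
import math

def pearson_ci(r: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a Pearson correlation,
    via the Fisher z-transform (reasonable for modest r and n > 3)."""
    fz = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(fz - z * se), math.tanh(fz + z * se)

# e.g. a hypothetical observed correlation of 0.4 between interview
# score and degree result across 200 admitted students:
lo, hi = pearson_ci(0.4, 200)
print(f"r = 0.40, 95% CI ({lo:.2f}, {hi:.2f})")
```

Even with a couple of hundred students, the interval is fairly wide, which is worth keeping in mind before reading much into any single cohort's numbers.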
Original post by jneill
No.

The whole point is they won't require any extra learning.


But I wonder how they'll ensure this is the case.

Each specification, whether it be AQA, WJEC, Edexcel, OCR etc., is different, so there is a chance the test covers something in the OCR and AQA specifications but not in Edexcel's, putting the Edexcel student at a disadvantage, for instance.
Original post by Excuse Me!
But I wonder how they'll ensure this is the case.

Each specification, whether it be AQA, WJEC, Edexcel, OCR etc., is different, so there is a chance the test covers something in the OCR and AQA specifications but not in Edexcel's, putting the Edexcel student at a disadvantage, for instance.


Possibly by having a reasonably wide range of questions but only a few need to be answered...

All will become clearer when the more detailed test info becomes available soon.
(edited 8 years ago)
Original post by shamika
Except it is easy* to answer the question "how correlated are interview scores with degree performance (for accepted students)?", which is really the only question people are interested in.

*More accurately, easy if the data is readily available


The CAT (Christ's Admissions Tutor) has mentioned that one challenge is that different interviewers use the "score sheet" differently. Some don't score the interview at all per se; they just give an overall mark assessing the quality of the candidate's total application. Others do record an interview score, but one interviewer might rarely give high marks, so their "7" might be another interviewer's "8".

Not ideal when trying to do a statistical analysis...
Original post by jneill
The CAT (Christ's Admissions Tutor) has mentioned that one challenge is that different interviewers use the "score sheet" differently. Some don't score the interview at all per se; they just give an overall mark assessing the quality of the candidate's total application. Others do record an interview score, but one interviewer might rarely give high marks, so their "7" might be another interviewer's "8".

Not ideal when trying to do a statistical analysis...


Quite. Interpreting straight correlations from data like this is a bit of a nightmare.

A modern approach would be at least to model the "rater effect" as a random intercept in some sort of ordinal regression. Even this is probably not sufficient, as one might have to take into account subject effects.
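A fully-fledged ordinal mixed model needs specialist tooling, but the attenuation Gregorius describes can be sketched in a few lines of NumPy. Everything below is synthetic (the rater offsets, sample sizes and noise levels are all assumptions), and centring scores within each rater stands in as a crude proxy for the random intercept:

```python
import numpy as np

# Synthetic illustration: each interviewer ("rater") adds a personal
# offset to scores, so raw correlations with degree results are
# attenuated; centring within raters recovers much of the signal.
rng = np.random.default_rng(0)
n_raters, per_rater = 20, 50
n = n_raters * per_rater

ability = rng.normal(0.0, 1.0, n)                 # latent candidate quality
rater = np.repeat(np.arange(n_raters), per_rater)
offset = rng.normal(0.0, 1.5, n_raters)[rater]    # one rater's "7" is another's "8"

interview = ability + offset + rng.normal(0.0, 0.5, n)
degree = ability + rng.normal(0.0, 0.5, n)

def centre_by_rater(x):
    """Subtract each rater's mean from their candidates' scores."""
    out = x.astype(float).copy()
    for r in range(n_raters):
        m = rater == r
        out[m] -= out[m].mean()
    return out

raw = np.corrcoef(interview, degree)[0, 1]
adj = np.corrcoef(centre_by_rater(interview), centre_by_rater(degree))[0, 1]
print(f"raw r = {raw:.2f}, within-rater r = {adj:.2f}")
```

The raw correlation is noticeably weaker than the within-rater one, purely because of the rater offsets, which is exactly why a straight correlation on pooled scores misleads.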
Original post by jneill
The point is about the use of Oxford-style pre-interview tests for many courses...

This gives rise to the possible implication that Cambridge will interview fewer applicants as a result (like Oxford). .


Oxford interviews fewer candidates, but to my knowledge conducts more interviews per candidate it does invite. I think the difference is due to a different use of interviews (investigating all possibilities versus extensively investigating the strong possibilities) rather than anything to do with admissions tests.

Also to do with the degree of college autonomy (less at Oxford).

Though maybe these are things up for debate at this time as well.

Original post by shamika
Fair enough. I really dislike Oxbridge's line that no advance preparation is needed. It might not be needed in a technical sense, but it's almost certainly required to be competitive. A very naive shamika took that at face value at 17 (luckily it didn't cost me an offer).


I took that at face value, only doing one past paper and nil else in preparation, and got 100% in BMAT section 2, 86% overall.

I think extensive preparation gives minimal advantage. People fret over having to memorise all the tiny details that came up in the answers to past papers, when in reality the idea is that you use your understanding to derive the answer from basic principles. Memorisation doesn't work. Being good at your subject does.

Original post by Excuse Me!
But I wonder how they'll ensure this is the case.

Each specification whether it be AQA, WJEC, Edexcel, OCR etc is not the same so there is a chance they have something in the test covered by the OCR and AQA specification but not if you do Edexcel putting the Edexcel student at a disadvantage for instance.


Similar to how they do it at the moment? E.g. STEP and BMAT? Essentially asking difficult questions based on basic things.
Original post by nexttime
Oxford interviews fewer candidates, but to my knowledge conducts more interviews per candidate it does invite. I think the difference is due to a different use of interviews (investigating all possibilities versus extensively investigating the strong possibilities) rather than anything to do with admissions tests.

Also to do with the degree of college autonomy (less at Oxford).

Though maybe these are things up for debate at this time as well.



I took that at face value, only doing one past paper and nil else in preparation, and got 100% in BMAT section 2, 86% overall.

I think extensive preparation gives minimal advantage. People fret over having to memorise all the tiny details that came up in the answers to past papers, when in reality the idea is that you use your understanding to derive the answer from basic principles. Memorisation doesn't work. Being good at your subject does.



Similar to how they do it at the moment? E.g. STEP and BMAT? Essentially asking difficult questions based on basic things.


Totally agree.
I've read somewhere (or was it the CAT?) that if an interviewer detects that a candidate has seen a similar question/problem before and has been coached or has practised it, they can swiftly switch to another question, to make sure they test the candidate on something new and unfamiliar and see how they work it out. And that's the main thing they want to do in interviews.
Original post by Gregorius
Quite. Interpreting straight correlations from data like this is a bit of a nightmare.

A modern approach would be at least to model the "rater effect" as a random intercept in some sort of ordinal regression. Even this is probably not sufficient, as one might have to take into account subject effects.


From what I gather from things I've read/heard, the interview score is little more than a note/memo in numerical form for each interviewer/DoS, to remind them of the candidate's performance at their interview.
It's really the last piece of the jigsaw puzzle when they try to build a whole (3D) picture of what each candidate is like as an applicant, not a basis for comparing applicants by interview score. And each interview can be slightly different from candidate to candidate, so you can't really compare them on the same basis.
So trying to find a correlation between interview score and future Tripos performance is a bit meaningless, I think.

That's my understanding anyway.
(edited 8 years ago)
Original post by jneill
The CAT (Christ's Admissions Tutor) has mentioned that one challenge is that different interviewers use the "score sheet" differently. Some don't score the interview at all per se; they just give an overall mark assessing the quality of the candidate's total application. Others do record an interview score, but one interviewer might rarely give high marks, so their "7" might be another interviewer's "8".

Not ideal when trying to do a statistical analysis...


Again, agreed. I am surprised that the CAT was so blunt about the problem, because it identifies a fundamental flaw in the admissions process. Unless ATs are (implicitly) allowing for such bias, how can you select the best students for offers?
Original post by Gregorius
Quite. Interpreting straight correlations from data like this is a bit of a nightmare.

A modern approach would be at least to model the "rater effect" as a random intercept in some sort of ordinal regression. Even this is probably not sufficient, as one might have to take into account subject effects.


More fundamental flaw: you have tiny amounts of data, so anything beyond a linear regression is almost certainly spurious.
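shamika's point can be illustrated with a toy example (all data synthetic): fit both a straight line and a very flexible polynomial to a handful of points, then score both on fresh draws from the same population:

```python
import numpy as np

# Toy illustration (synthetic data): with only 8 observations, a flexible
# model fits the noise perfectly but predicts fresh data far worse than
# a plain linear fit.
rng = np.random.default_rng(2)
n = 8
x = rng.normal(0.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)   # true relationship: linear + noise

linear = np.polyfit(x, y, 1)
wiggly = np.polyfit(x, y, 7)            # degree 7 interpolates all 8 points

# Fresh "candidates" from the same population:
x_new = rng.normal(0.0, 1.0, 1000)
y_new = 0.5 * x_new + rng.normal(0.0, 1.0, 1000)

def mse(coefs):
    """Mean squared prediction error on the fresh sample."""
    return float(np.mean((np.polyval(coefs, x_new) - y_new) ** 2))

print(f"linear MSE = {mse(linear):.2f}, degree-7 MSE = {mse(wiggly):.2e}")
```

The flexible fit is perfect in-sample and disastrous out-of-sample, which is the sense in which anything elaborate fitted to tiny data is "almost certainly spurious".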


Original post by shamika
More fundamental flaw: you have tiny amounts of data, so anything beyond a linear regression is almost certainly spurious.




Not quite sure I follow; Cambridge interviews thousands of students. Decent sample size especially compared to the work I usually do!
Original post by vincrows
Totally agree.
I've read somewhere (or was it the CAT?) that if an interviewer detects that a candidate has seen a similar question/problem before and has been coached or has practised it, they can swiftly switch to another question, to make sure they test the candidate on something new and unfamiliar and see how they work it out. And that's the main thing they want to do in interviews.


They did that for me (though admittedly at Oxford). They gave me a set of chromosomes and literally asked me whether I'd heard of Down's syndrome, and I very hesitantly answered that I thought I had, and that all I knew was that it might be the one with 3 chromosomes. That alone was sufficient for them to completely cut the question and move on!
Original post by nexttime
They did that for me (though admittedly at Oxford). They gave me a set of chromosomes and literally asked me whether I'd heard of Down's syndrome, and I very hesitantly answered that I thought I had, and that all I knew was that it might be the one with 3 chromosomes. That alone was sufficient for them to completely cut the question and move on!


Yeah, you're up against some of the top academics in the field in the UK, or even in the world, so there's no escaping! :biggrin:
Original post by Gregorius
Not quite sure I follow; Cambridge interviews thousands of students. Decent sample size especially compared to the work I usually do!


I was thinking college and course combo, but that doesn't really make much sense!
Original post by shamika
I was thinking college and course combo, but that doesn't really make much sense!


Oh I think it does. You'd like to control for the rater effect mainly, but chucking in a course effect and a college effect would be fun. My computer is salivating at the thought...
Original post by Gregorius
Oh I think it does. You'd like to control for the rater effect mainly, but chucking in a course effect and a college effect would be fun. My computer is salivating at the thought...


To clarify: it didn't make much sense to say there isn't a huge amount of data because, as you say, there will be lots of data at the course level over time.

It obviously does make sense to do an analysis at the course/college level. I was thinking you could use that as a proxy for the rating effect, but that would only really work if there is a single consistent interviewer over the period of the study.

If this isn't making sense, it's because I'm being particularly bad at making my point today, you're not missing anything profound!
