Interviews Research

If you want to take part, please fill out the form below. Note that you will be followed up on in January so please don't fill it in if you'd be unwilling to report a rejection.

https://docs.google.com/forms/d/e/1FAIpQLScaTxRNsHqyKcYR1WbE7_NtWWWVfE15hcXKp29yymnb19Wv1A/viewform

Background post:

In 2016, Doones did a very similar thing in order to produce this graph, the point of which is that applicants are bad at judging how well their interviews went, and that you shouldn't be too worried about an interview you think went badly:
[Attached image: Cambridge Interview Outcomes (updated)]

I'm trying to recreate this to make it a bit more solid, as well as considering how hard applicants felt their interviews were.
Hi - would you consider making another for Oxbridge as a whole? It might give a larger data pool.
Original post by DeBeauvoir2
Hi - would you consider making another for Oxbridge as a whole? It might give a larger data pool.

I believe @azby1098 is collecting similar data publicly on the Oxford thread so it certainly might be possible to combine the datasets. I might get round to it but someone else could also have a look if they want.
Original post by Theloniouss
I believe @azby1098 is collecting similar data publicly on the Oxford thread so it certainly might be possible to combine the datasets. I might get round to it but someone else could also have a look if they want.


Hello!

Yes I am. It's not as detailed and will be a much smaller data set, but I have set up an interview scoring system and will make the spreadsheet available to edit next Tuesday, so users can add whether they were made an offer or rejected.
Original post by Theloniouss
If you want to take part, please fill out the form below. Note that you will be followed up on in January so please don't fill it in if you'd be unwilling to report a rejection.

https://docs.google.com/forms/d/e/1FAIpQLScaTxRNsHqyKcYR1WbE7_NtWWWVfE15hcXKp29yymnb19Wv1A/viewform


Hi, do you know what form we post the decisions on? I can’t find it on the applicants thread
And results day is (I hope) over! If you could all fill out your offer details in the results form, that would be great, thanks:

https://docs.google.com/forms/d/1rYWnqVrHWaFoTBXoCrctajR3_wHIQgFR1jGfQlYeHi4/edit
Since you are doing research on interviews, I guess this might be a good place to ask the following.

Hi,

I applied to Trinity to read mathematics starting in October 2022. I chose Trinity because it gives the most full scholarships. I completely failed the interview: my brain basically stopped working, and so I ended up doing curve sketching for almost half an hour. In the end I received an offer from another college, which probably doesn't give full scholarships, so I won't be able to continue my education there.

So, I am interested to hear whether anybody knows how much they value interview performance. I think that, apart from the interview, my application has been almost perfect - an International Math Olympiad silver, and twice attending a six-week-long summer math program. So, does anybody know how interview performance compares to other parts of an application? Do they value the interview the most?

Thank you in advance!
Original post by mathskeptic
Since you are doing research on interviews, I guess this might be a good place to ask the following.

Hi,

I applied to Trinity to read mathematics starting in October 2022. I chose Trinity because it gives the most full scholarships. I completely failed the interview: my brain basically stopped working, and so I ended up doing curve sketching for almost half an hour. In the end I received an offer from another college, which probably doesn't give full scholarships, so I won't be able to continue my education there.

So, I am interested to hear whether anybody knows how much they value interview performance. I think that, apart from the interview, my application has been almost perfect - an International Math Olympiad silver, and twice attending a six-week-long summer math program. So, does anybody know how interview performance compares to other parts of an application? Do they value the interview the most?

Thank you in advance!

Interview scores are usually very important. No official data on this, really, but you could probably do some analysis on FOI requests to determine what the best predictors of receiving an offer are.
Original post by Theloniouss
Interview scores are usually very important. No official data on this, really, but you could probably do some analysis on FOI requests to determine what the best predictors of receiving an offer are.

Thank you for the quick reply! What is FOI?
Original post by mathskeptic
Thank you for the quick reply! What is FOI?

Freedom of Information requests - you can find them on whatdotheyknow.com
Original post by Theloniouss
Freedom of Information requests - you can find them on whatdotheyknow.com

Thanks! Is it possible to ask them for the grade of the personal statement and the parts of the application distinct from the interview?
Original post by mathskeptic
Thanks! Is it possible to ask them for the grade of the personal statement and the parts of the application distinct from the interview?

Those aren't graded
Quick update on data collection for anyone wondering: I've sent out the second round of emails as well as the first round of PMs to non-respondents. So far the response isn't terrible; I have results for approximately 109 out of 154 initial respondents. I intend to start removing the initial respondents who didn't provide valid usernames or email addresses, then send out a third round of emails and PMs in the coming week (this time attempting to correct misspelled usernames and email addresses), and leave another week or so after that for responses to stop trickling in. At that point I will do the data analysis and post it here.
Original post by Theloniouss
Quick update on data collection for anyone wondering: I've sent out the second round of emails as well as the first round of PMs to non-respondents. So far the response isn't terrible; I have results for approximately 109 out of 154 initial respondents. I intend to start removing the initial respondents who didn't provide valid usernames or email addresses, then send out a third round of emails and PMs in the coming week (this time attempting to correct misspelled usernames and email addresses), and leave another week or so after that for responses to stop trickling in. At that point I will do the data analysis and post it here.

Thank you! Looking forward to the results. By the way, your email turned up in my spam folder, so that may have prevented some people from seeing it!
I've had a brief look through the data and it might be a little more difficult than I thought. The results are close enough that I may have to learn new data analysis methods to get a meaningful answer. I will say that that alone (considering the sample size) suggests that it's close enough for the previous assumption to hold.
Original post by Theloniouss
I've had a brief look through the data and it might be a little more difficult than I thought. The results are close enough that I may have to learn new data analysis methods to get a meaningful answer. I will say that that alone (considering the sample size) suggests that it's close enough for the previous assumption to hold.

Wow, thank you as always :smile:
Hello everyone, the results are now in! Depressingly, it's inconclusive whether applicants can predict their interview scores with any accuracy, though I can safely say applicants can't predict them with high accuracy. The full dataset, as well as a (rushed and poor) attempt at proper analysis, can be found below.

This has been fun and I might have to see you all again next year :smile:
The ability of Cambridge applicants to predict their interview scores
Theloniouss

Introduction
Post-interview stress is common among Cambridge (and Oxford) applicants, in significant part because many applicants leave their interview with a very poor idea of how well they have performed. In 2016, user @Doonesbury conducted some research on this and found that the interview score predictions of unsuccessful and successful applicants are not meaningfully different (see Figure 1). My aim with this research is to conduct the same test on a larger sample size. I also aim to account for both predicted score and perceived difficulty, in order to determine whether the combination might serve as a more effective predictor of success.

I hypothesise that, in accordance with @Doonesbury’s results, there will be no significant difference between the predicted scores of successful and unsuccessful applicants. I expect that there will also be no difference between the groups in terms of the perceived difficulty of their interviews.


Figure 1


Methods

Data was collected via a survey, with the initial round asking for applicants' predicted interview scores and how difficult they found the interview, on a scale of 1 to 10. Applicants were identified by their email address or TSR username, both of which were provided.

Invalid responses to this survey (those where an applicant's email address and TSR username could not be used to contact them, usually because of invalid addresses or email addresses for internal use only, such as school emails) were removed, and can be found, anonymised, in Appendix 1.

A second survey, available after decisions had come out, asked whether applicants had received offers, as well as for their course and college. Course and college data will not be used in this analysis due to the small sample sizes, but if similar surveys are conducted in future the data could be collated.

Second survey responses were matched to first survey responses by the email and TSR username provided. Where first survey responses could not be matched, they were classed as “non-respondents” and can be found in Appendix 1. The full dataset of complete responses may also be found in Appendix 1.
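
For anyone wanting to rerun this kind of exercise, a minimal sketch of the matching step in pandas is below. The file names and column names ("initial_survey.csv", "email", "tsr_username", "offer" and so on) are placeholders I have made up, not the actual survey exports.

```python
# Sketch of pairing first- and second-survey responses by email, then by
# TSR username; all file and column names here are hypothetical.
import pandas as pd

first = pd.read_csv("initial_survey.csv")   # predicted score + difficulty
second = pd.read_csv("results_survey.csv")  # offer / rejection

# Normalise identifiers so trivial case/whitespace differences don't
# produce spurious non-respondents.
for df in (first, second):
    df["email"] = df["email"].str.strip().str.lower()
    df["tsr_username"] = df["tsr_username"].str.strip().str.lower()

# Match on email first, then fall back to TSR username for the remainder.
by_email = first.merge(second, on="email", suffixes=("", "_2"))
remainder = first[~first["email"].isin(by_email["email"])]
by_username = remainder.merge(second, on="tsr_username", suffixes=("", "_2"))

paired = pd.concat([by_email, by_username], ignore_index=True)

# First-survey responses with no match in the second survey are the
# "non-respondents" described above.
non_respondents = first[
    ~(first["email"].isin(paired["email"])
      | first["tsr_username"].isin(paired["tsr_username"]))
]
```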

In addition to the novel data collection, I have also transcribed Doones' data from the graph (Figure 1); this transcription is also available in Appendix 1.

All results have been anonymised and individual data points are identified by ID number.


Results and Analysis

1. Results at a glance
The initial survey received 154 responses, of which 115 could be paired with a response to the second survey. 16 of the 154 responses contained errors that made it impossible to contact the initial respondent, and a further 23 responses could not be paired with a response to the second survey. The responses transcribed from Doones' graph amounted to 63 complete responses. The tables below (Figure 2) contain key summary statistics for the main groups of interest, and the accompanying graph (Figure 3) shows the distributions of predicted scores and reported difficulties, divided into successful and unsuccessful applicants (a rough sketch of how such a summary could be reproduced from the Appendix 1 spreadsheet follows Figure 3).


Figure 2




Figure 3: Bean plots of interview score and difficulty, allowing for easy comparison of the distributions.
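
As flagged above, a Figure 2-style summary could be reproduced from the Appendix 1 spreadsheet with something like the pandas sketch below. The file name and column names ("group", "score", "difficulty") are assumptions about how the data is laid out, not the actual headers.

```python
# Sketch of per-group summary statistics (counts, means, medians) for the
# predicted score and perceived difficulty; names are hypothetical.
import pandas as pd

responses = pd.read_csv("all_responses.csv")  # placeholder file name

summary = responses.groupby("group").agg(
    n=("score", "size"),
    mean_score=("score", "mean"),
    median_score=("score", "median"),
    mean_difficulty=("difficulty", "mean"),
    median_difficulty=("difficulty", "median"),
)
print(summary.round(2))
```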


2. Data clean-up
The data above requires some further consideration before we can analyse it. The first question is how to treat the non-respondent and error categories. Both categories appear to differ from the successful and unsuccessful categories in potentially significant ways, such as in the mean score of non-respondents and the median difficulty for errors.

I would tend to ignore the errors, as I expect these applicants are no more or less likely to have received offers (though it could be argued that the inability to type your own email address suggests you are not Cambridge material). The non-respondents, however, are probably more likely not to have received offers, as applicants who were deselected are likely to be less willing to respond. I have discounted them anyway, as it would be difficult to account for this bias.

Next, we should consider whether to include Doones' data. While its inclusion would increase our sample size, without knowing how the data was collected it's probably not sensible to include it, as different methods of data collection might produce systematically different results. For example, Doones' data uses a different scale to mine, which has resulted in fractional score predictions. Because of this, and because there is no way to account for perceived difficulty, I will not be using Doones' data in this analysis.


3. Data analysis
In order to analyse this data, I have used logistic regression. This accounts for the binary nature of the response variable and means that certain assumptions typical of regression analysis, like normality and homoscedasticity (equal variance), can be ignored. The assumptions of logistic regression, and the extent to which each is met, are below:
1. Binary response variable: the response variable is whether or not an offer is received.
2. Independent observations: almost certainly met; I can't see how applicants could have influenced one another's responses.
3. No correlation between explanatory variables: this assumption is slightly violated, as the two explanatory variables show weak correlation, as determined by Kendall's tau correlation test, which was used because it accounts for tied ranks and doesn't require normality (tau = -0.18, p = 0.017). The correlation, however, is low, so logistic regression should still be valid (a sketch of this check is shown after this list).
4. Large sample size: the sample size here is 115-138, which should be large enough for logistic regression.
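
The check in assumption 3 could be run along these lines with scipy; this is only a sketch, assuming the paired responses sit in a dataframe with hypothetical "score" and "difficulty" columns as in the matching sketch above.

```python
# Kendall's tau between predicted score and perceived difficulty; this
# statistic tolerates tied ranks and does not require normality.
from scipy.stats import kendalltau

tau, p_value = kendalltau(paired["score"], paired["difficulty"])
print(f"Kendall's tau = {tau:.2f}, p = {p_value:.3f}")
# The values reported above (tau = -0.18, p = 0.017) correspond to only
# weak correlation, so both explanatory variables are kept in the model.
```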

3.1. Fitting the model
In fitting the model, I initially assumed an interaction between interview difficulty and score: it's likely that these influence each other, so it makes sense to include an interaction term. Since the AIC for this model is lower than the AIC for the model which excludes the interaction (153.7 with the interaction, 154.3 without it), I have left the interaction in the model. The coefficients, std. errors and p-values for this model are below.
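
A sketch of how this comparison could be run with statsmodels is included here; the dataframe and column names ("offer" coded 0/1, "score", "difficulty") are my assumptions rather than the exact ones used, and the exact numbers depend on the data in Appendix 1.

```python
# Logistic regression with and without the score:difficulty interaction,
# compared by AIC, using statsmodels' formula interface.
import statsmodels.formula.api as smf

with_interaction = smf.logit("offer ~ score * difficulty", data=paired).fit()
without_interaction = smf.logit("offer ~ score + difficulty", data=paired).fit()

print("AIC with interaction:   ", round(with_interaction.aic, 1))    # ~153.7 reported
print("AIC without interaction:", round(without_interaction.aic, 1))  # ~154.3 reported
print(with_interaction.summary())  # coefficients, std. errors and p-values
```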


The plot below visualises these results, as well as the data which was fitted:

Figure 4: The shade of the graph indicates the predicted offer likelihood, with darker squares indicating higher likelihood of receiving an offer. The points plotted show the collected data, with offers indicated in blue and rejections in red. Larger points indicate a greater number of respondents giving that exact combination of answers.


The model, however, does not find any significant results. None of the p-values are below 0.05, though the values for both score and score:difficulty are close. McFadden’s r-squared for this model is 0.085, suggesting very poor explanatory power.
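
For reference, McFadden's pseudo R-squared is one minus the ratio of the fitted model's log-likelihood to that of an intercept-only model; continuing the hypothetical statsmodels fit above, it can be read off the fitted result directly.

```python
# McFadden's pseudo R-squared: 1 - llf(model) / llf(intercept-only model).
mcfadden_r2 = 1 - with_interaction.llf / with_interaction.llnull
print(f"McFadden's R-squared: {mcfadden_r2:.3f}")
# statsmodels also exposes this directly as `with_interaction.prsquared`.
```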


Discussion
The results are inconclusive, given how close to 0.05 the p-values are and that different methods of analysis might well have returned a significant result. The explanatory power of even the inclusive model is very poor. However, the key takeaway here should be the comparison between the mean predictions of the successful and unsuccessful applicants, as well as the high variance of both groups: it would be unreasonable for an applicant to assume they had failed because their interview was difficult or because they felt it went poorly (and vice versa).

It is reasonable to conclude that an applicant’s prediction of their interview score has very little bearing on whether their application is likely to be successful or not. This analysis finds the predictive power of applicant estimates to be insignificant.


Appendix 1: The full data set, anonymised: https://docs.google.com/spreadsheets/d/1yr2pI1AtXbvQroqEO2vHHm4el08Dd9Oz/edit?usp=sharing&ouid=110543563575637432283&rtpof=true&sd=true

Bravo @Theloniouss! 👏
