    (Original post by Bruhh)
    When using the normal tables with a z value to 4+ significant figures, do you guys take an average from the 'add' column? My values always seem to be 1 out in the 4th decimal place, but we were never taught to take an average.
    You don't have to do that!!! The mark scheme always says correct answer to 3 s.f.!!
    (Original post by Gabriella98)
    You don't have to do that!!! The mark scheme always says correct answer to 3 s.f.!!
    Ah it does indeed, so I'm perfectly fine doing it slightly less accurately.
    Pls help. What definitions and worded answers do we need to know for S2?
    (Original post by ComputerMaths97)
    Do probabilities to 4 d.p. and other values to 3 or 4 significant figures, depending on the value.

    For example, 0.487548375384 should be 0.4875
    125.3258 should be 125.3
    83.2142352 should be 83.2 or 83.21 depending on which you prefer; both get the marks.

    Just never go beyond 4 s.f. for solutions. However, remember never to use rounded values in your calculations.
    What about contingency table contributions, e.g. 1.2345 and 0.1234, or 1.234 and 0.1234, as examples for two cells?
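    The rounding advice quoted above can be sketched in a few lines of Python; `round_sig` is a helper name invented here for illustration, not anything from the spec:

    ```python
    from math import floor, log10

    def round_sig(x, sig):
        """Round x to the given number of significant figures."""
        if x == 0:
            return 0.0
        return round(x, sig - 1 - floor(log10(abs(x))))

    print(round(0.487548375384, 4))  # probability to 4 d.p. -> 0.4875
    print(round_sig(125.3258, 4))    # -> 125.3
    print(round_sig(83.2142352, 3))  # -> 83.2
    ```
    
    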
    Hey, can anyone give me an explanation of how to interpret contributions to the x^2 statistic? Confused...
    When we calculate contributions, do we use the rounded expected value, e.g. 10.46, or the exact value 136/13?
    (Original post by ABeingOnEarth)
    Hey, can anyone give me an explanation of how to interpret contributions to the x^2 statistic? Confused...
    The x^2 contribution is a way of quantitatively displaying the difference between the observed result and the expected result. Thus the larger the discrepancy between the two, the larger the contribution. If the sum of the contributions is larger than the x^2 critical value for your test, then the result is significant.

    The formula is (O-E)^2 / E, where O is the observed value and E is the expected value.

    Do that for each cell and sum the totals to find the x^2 test statistic!

    Hope that helps mate, and just ask if you need any help with anything else
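    The recipe above can be sketched in a few lines of Python; the observed and expected counts are made-up illustrative values:

    ```python
    observed = [20, 30, 25, 25]
    expected = [25, 25, 25, 25]  # e.g. a uniform null hypothesis

    # Contribution of each cell: (O - E)^2 / E
    contributions = [(o - e) ** 2 / e for o, e in zip(observed, expected)]

    # The test statistic is the sum of the contributions
    x2 = sum(contributions)

    print(contributions)  # the largest contributions show where O and E differ most
    print(x2)             # compare this against the critical value
    ```
    
    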
    (Original post by 11234)
    Pls help. What definitions and worded answers do we need to know for S2?
    Definition of significance level usually comes up, and understand the advantages and disadvantages of a high/low level.

    Know the circumstances in which specific tests are used, e.g. the Poisson approximation for when p is small and n is large.

    PMCC when the scatter diagram shows a linear correlation and the data is a bivariate normal distribution.

    Spearman's rank when the data on a diagram is in a non-linear pattern.

    Know what an independent/dependent variable is.
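    The Poisson point above can be checked numerically: when n is large and p is small, B(n, p) is close to Po(np). A quick sketch with made-up values n = 100, p = 0.02:

    ```python
    from math import comb, exp, factorial

    n, p = 100, 0.02
    lam = n * p  # Poisson mean = np = 2

    def binomial_pmf(k):
        # P(X = k) for X ~ B(n, p)
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    def poisson_pmf(k):
        # P(X = k) for X ~ Po(lam)
        return lam**k * exp(-lam) / factorial(k)

    # The two columns agree to about 2 decimal places
    for k in range(4):
        print(k, round(binomial_pmf(k), 4), round(poisson_pmf(k), 4))
    ```
    
    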
    (Original post by hunter0raf42)
    The x^2 contribution is a way of quantitatively displaying the difference between the observed result and the expected result. Thus the larger the discrepancy between the two, the larger the contribution. If the sum of the contributions is larger than the x^2 critical value for your test, then the result is significant.

    The formula is (O-E)^2 / E, where O is the observed value and E is the expected value.

    Do that for each cell and sum the totals to find the x^2 test statistic!

    Hope that helps mate, and just ask if you need any help with anything else
    Thanks very much!!
    (Original post by 11234)
    When we calculate contributions, do we use the rounded expected value, e.g. 10.46, or the exact value 136/13?
    The mark scheme usually has the rounded decimal value. I usually put both, so I can use the exact values in my calculations but still see the decimal values in case they ask you to comment on the contributions.
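    That advice can be illustrated with the thread's own numbers; the observed count of 14 here is a made-up value for illustration:

    ```python
    from fractions import Fraction

    observed = 14
    exact_E = Fraction(136, 13)  # the exact expected value, as in the thread
    rounded_E = 10.46            # the same value to 2 d.p.

    # Keep the exact value for the calculation, quote the rounded one
    exact_contrib = float((observed - exact_E) ** 2 / exact_E)
    rounded_contrib = (observed - rounded_E) ** 2 / rounded_E

    print(round(exact_contrib, 4))    # use this in the test statistic
    print(round(rounded_contrib, 4))  # close, but slightly different
    ```
    
    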
    It seems like there is always one unusual/more difficult question in each paper, and then the rest is all the same.
    S2 is really easy if you do past papers.

    If you don't, you're screwed.
    (Original post by ComputerMaths97)
    S2 is really easy if you do past papers.

    If you don't, you're screwed.
    It's such an anal unit. If you don't hit that buzzword you lose marks, and the boundaries are usually v high. It's v v annoying, although not very hard to get your head around.
    This paper last year had unreal grade boundaries..

    Posted from TSR Mobile
    (Original post by hunter0raf42)
    It's such an anal unit. If you don't hit that buzzword you lose marks, and the boundaries are usually v high. It's v v annoying, although not very hard to get your head around.
    The concept is okay to understand, but there are the twisty questions at the end of some Poisson and normal distribution questions. And those questions involving writing I can never get right... explain this / comment on the contributions / define this / why is this this... goodness gracious.


    Posted from TSR Mobile
    (Original post by ComputerMaths97)
    There are many approaches, but here's how I do these:

    Since we need to find a range (a<x<b) such that 90% (p=0.9) of the data points fit within that range, I like to go for a symmetrical range.

    Therefore take the range from p=0.05 to p=0.95:
    90% of the data lies in this range.

    Therefore, we have two things to solve:
    1) P( x < a) = 0.05 (we want to find the value of a for which only 5% of the data is under it)
    Solving this is pretty simple:
    P( z < [a-mean]/s.d) = 0.05
    And therefore [a-mean]/s.d = inverse-phi(0.05)
    = -inverse-phi(0.95)
    = -1.645
    Then you can just solve a, since you would've been given the mean and the standard deviation in the question

    2) P(x > b) = 0.05 (we want to find the value of b for which only 5% of the data is over it)
    Solve the same as normal.

    This will give two values (a and b) such that they collectively contain 90% (0.9) of the data. So say for example we get a = 135 and b=165
    It would be true that P(135 < x < 165) = 0.9
    Therefore 90% of the data is between 135 and 165

    Hope this helped!
    Brilliant explanation, thank you! Don't mind if this comes up now

    Posted from TSR Mobile
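    The symmetric-interval method quoted above can be sketched with Python's `statistics.NormalDist`; the mean and standard deviation are made-up example values:

    ```python
    from statistics import NormalDist

    mean, sd = 150, 10
    dist = NormalDist(mean, sd)

    a = dist.inv_cdf(0.05)  # P(X < a) = 0.05, i.e. z = -1.645
    b = dist.inv_cdf(0.95)  # P(X > b) = 0.05, by symmetry

    # 90% of the data lies between a and b
    print(round(a, 1), round(b, 1))
    ```
    
    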
    How did everyone find that?
    Well.
    Wasn't too bad, apart from MEI trying to confuse everyone with the choice of scale for the scatter graph 😂
    Feel like I've got the test stat for the last question wrong. I got like 1/3, which was miles away from the z critical value.
The Student Room, Get Revising and Marked by Teachers are trading names of The Student Room Group Ltd.

Register Number: 04666380 (England and Wales), VAT No. 806 8067 22 Registered Office: International House, Queens Road, Brighton, BN1 3XE
