In a one-way ANOVA, why does reducing the standard deviation of subjects to zero decrease the F-Value and make the test insignificant?
- Thread Starter
- 29-01-2010 11:59
- 29-01-2010 12:08
Because when the between-groups sum of squares is smaller, the group means are closer together, so there is less evidence for the alternative hypothesis and the test is less likely to be significant.
- 30-01-2010 21:34
F = MS(model)/MS(residual)
where MS = SS/df for each source of variability. If all variability = 0, then both types of MS = 0.
e.g., if the sum of squares for your model is 0 in the 3-group scenario, the model df = k - 1 = 2.
Therefore MS(model) = 0/2 = 0.
Similarly, if the df for the residual = 12, then for no variability, SS = 0 and MS(residual) = 0/12 = 0.
Thus F = 0/0, which is undefined (not 0), so no test statistic can be computed.
In essence, if SD = 0, then every observation is the same score, so there is no evidence that the groups come from different populations. It's the perfect null hypothesis.
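The arithmetic above can be sketched directly. This is a minimal hand-rolled one-way ANOVA (the function name `one_way_f` and the example scores are made up for illustration); it shows that a dataset with some spread gives an ordinary F, while a dataset where every score is identical gives SS(model) = SS(residual) = 0 and an undefined F:

```python
def one_way_f(groups):
    """Return (SS_model, SS_residual, F) for a list of groups of scores.
    F is None when MS(residual) = 0, since 0/0 is undefined."""
    all_scores = [x for g in groups for x in g]
    n = len(all_scores)
    grand_mean = sum(all_scores) / n
    # Between-groups (model) sum of squares: group sizes times squared
    # deviations of group means from the grand mean
    ss_model = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups (residual) sum of squares: squared deviations of
    # each score from its own group mean
    ss_resid = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_model = len(groups) - 1   # k - 1
    df_resid = n - len(groups)   # N - k
    if ss_resid == 0:
        return ss_model, ss_resid, None  # 0/0 when SS(model) is also 0
    return ss_model, ss_resid, (ss_model / df_model) / (ss_resid / df_resid)

# Groups with real spread: a well-defined F
print(one_way_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # (6.0, 6.0, 3.0)

# SD = 0: every subject has the same score, so F = 0/0 is undefined
print(one_way_f([[5, 5, 5, 5, 5]] * 3))              # (0.0, 0.0, None)
```

Statistical software behaves the same way: with zero variability it reports the F as missing rather than 0.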