Non-significant ANOVA but significant T-tests?

Hi! I'm testing the effects of a negative or neutral mood on three tests of memory.
So I did a 3x2 ANOVA.

There was no significant main effect of memory test or mood, and no interaction.
My supervisor suggested rerunning the ANOVA without one of the memory tests, because accuracy on it was nearly 100% (at ceiling).
So I did a 2x2 ANOVA - again, no significant effect of memory test or mood, and no interaction.

HOWEVER

I also did separate t-tests:
- Dynamic test (neutral vs negative mood) - non-significant.
- Static test (neutral vs negative mood) - significant (participants got better in the negative mood, strangely).
- Neutral mood (dynamic vs static test) - non-significant.
- Negative mood (dynamic vs static test) - significant (participants were better at the static task under the negative mood induction than at the dynamic task).

This makes it seem like there is an interaction effect, as I hypothesised.
But if the ANOVA was non-significant, should I really be getting significant results on t-tests? Why would this be the case?

My supervisor has really messed me around with my results, and now she's 'ill' and my coursework is due next week. I obviously can't start the discussion if my results aren't done, but I don't know what to do. How can I justify doing four separate t-tests when an ANOVA is the correct test to use?
The problem with conducting multiple t-tests is that each one carries its own chance of a Type I error, so the family-wise error rate climbs with every extra comparison: the more tests you run, the more likely you are to get at least one false positive.

It's explained quite nicely here: https://statistics.laerd.com/statistical-guides/one-way-anova-statistical-guide-2.php
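
To put a number on that: a quick sketch in plain Python (purely illustrative, no real data involved) of the family-wise error rate across four tests like the ones described above.

    # With k independent tests each run at alpha = .05, the chance of
    # at least one false positive is 1 - (1 - alpha)**k.
    alpha = 0.05
    for k in (1, 2, 4):
        fwer = 1 - (1 - alpha) ** k
        print(f"{k} test(s): P(at least one false positive) = {fwer:.3f}")

With four t-tests that comes to about .185, nearly one in five, which is why a single omnibus test (or a corrected alpha, e.g. Bonferroni at .05/4) is preferred.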
I don't really understand how you've inferred an interaction from that result; doesn't that pattern of t-tests suggest a main effect? 🤔

Also, I think it's much wiser to stick with the test you feel more secure with and accept a non-significant result. You won't get marked down for a non-significant result, but you will get penalised for using an incorrect test! So if I were you, stick with the ANOVA results and start on your discussion. :smile:
Original post by Mojojojo
Hi! I'm testing the effects of a negative or neutral mood on three tests of memory. [...]

You haven't described your analysis very well, so I can't really comment properly. Just because you've found a non-significant effect in one group but a significant effect in the other group, it doesn't mean that there is an interaction. The effect in group A could be p = .04 and in group B p = .06, but that doesn't mean much on its own, because the difference between a significant and a non-significant result isn't in itself statistically significant (see this paper).
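
That point is the title of Gelman and Stern's well-known article, "The Difference Between 'Significant' and 'Not Significant' is not Itself Statistically Significant". A minimal sketch of the idea in Python, using made-up summary statistics (the effects and standard errors below are invented for illustration, not taken from this study):

    from scipy import stats

    # Hypothetical simple effects: mean difference and its standard error.
    effect_A, se_A = 2.0, 0.98   # "significant" on its own (p ~ .04)
    effect_B, se_B = 1.5, 0.80   # "non-significant" on its own (p ~ .06)

    for name, b, se in [("A", effect_A, se_A), ("B", effect_B, se_B)]:
        z = b / se
        print(f"group {name}: z = {z:.2f}, p = {2 * stats.norm.sf(abs(z)):.3f}")

    # The interaction is the *difference* between the two simple effects,
    # and its standard error combines both groups' uncertainty:
    diff = effect_A - effect_B
    se_diff = (se_A**2 + se_B**2) ** 0.5
    z = diff / se_diff
    print(f"difference: z = {z:.2f}, p = {2 * stats.norm.sf(abs(z)):.3f}")

One effect lands at p ~ .04 and the other at p ~ .06, yet the test of their difference gives p ~ .69: no evidence of an interaction at all.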
Original post by iammichealjackson
You haven't described your analysis very well, so I can't really comment properly. [...]


Ok thank you! I was just doing what my supervisor said with the t-tests but knew something wasn't right.
Original post by Twinpeaks
I don't really understand how you've inferred an interaction from that result [...]


I inferred an interaction because the t-tests showed no difference between the two test types in the neutral mood state, but showed that the negative mood had a differential effect depending on test type: static performance improved, whereas dynamic stayed the same.

I'm just going to stick with the ANOVA now. I can at least say that an effect may have been emerging but the study wasn't sufficiently powered for it to reach significance, so I can still discuss what I expected to happen.
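
If the discussion does lean on the power argument, it's worth quantifying rather than just asserting. A rough sketch with statsmodels, where the effect size, target power, and cell size are illustrative placeholders rather than values from this study:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Participants per group needed to detect a medium effect
    # (Cohen's d = 0.5) with 80% power at alpha = .05, two-sided:
    n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"n per group for 80% power: {n_needed:.0f}")  # ~64

    # Power actually achieved with, say, 20 participants per group:
    achieved = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
    print(f"power with n = 20 per group: {achieved:.2f}")  # ~0.34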
You do not run separate t-tests when your ANOVA was non-significant. Report the ANOVA and explain why you believe it was non-significant.
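
For completeness, a minimal sketch of how the 2x2 ANOVA itself might be run and reported in Python with statsmodels; the data here are synthetic and the column names are placeholders, not the OP's actual variables.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Synthetic 2x2 between-subjects data: mood x memory test.
    rng = np.random.default_rng(0)
    n = 20  # per cell, illustrative
    df = pd.DataFrame({
        "mood": np.repeat(["neutral", "negative"], 2 * n),
        "test": np.tile(np.repeat(["static", "dynamic"], n), 2),
        "accuracy": rng.normal(0.75, 0.1, size=4 * n),
    })

    # Two-way model with interaction; the C(mood):C(test) row of the
    # ANOVA table is the omnibus test for the pattern the simple-effect
    # t-tests seemed to hint at.
    model = smf.ols("accuracy ~ C(mood) * C(test)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))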
