
Can someone help with this stats problem please? PSYCHOLOGY :(

I'm trying to work out which stats test is needed for this specific study, and I'm struggling a bit. So if somebody could point me in the right direction, or show me some good online educational sites, it'd be much appreciated!

Basically, there are three groups, two intervention conditions, and a control.
So group 1 is given intervention A
Group 2 is given intervention B
Group 3 is given no intervention

The researchers measure participants' attitudes before the intervention and then, using the same measure, assess their attitudes again after the intervention.


So there's a within-subjects component (attitudes before compared to attitudes after intervention), but there's also a between-subjects component (comparing the effects of Intervention A, B or no intervention on attitudes).


Would this be a Mixed-ANOVA?

If not, please give me some guidance! :frown:
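For reference, here is a minimal sketch of how this 3 (group) x 2 (pre/post) design could be laid out and handed to a mixed ANOVA, assuming Python with the pingouin library; every name and number below is invented purely for illustration.

```python
# Sketch of the 3 (group) x 2 (pre/post) design, assuming the pingouin
# library is installed. All data and names here are invented.
import pandas as pd
import pingouin as pg

# Long format: one row per participant per time point.
df = pd.DataFrame({
    "id":    ["a1", "a1", "a2", "a2", "b1", "b1", "b2", "b2", "c1", "c1", "c2", "c2"],
    "group": ["A"] * 4 + ["B"] * 4 + ["control"] * 4,
    "time":  ["pre", "post"] * 6,
    "score": [3.0, 4.1, 2.8, 3.9, 3.1, 3.6, 2.9, 3.3, 3.0, 3.1, 3.2, 3.1],
})

# 'time' is the within-subjects factor, 'group' the between-subjects factor.
print(pg.mixed_anova(data=df, dv="score", within="time",
                     subject="id", between="group"))
```

The 'Interaction' row of the output is the part that answers "did the three groups change by different amounts?", which turns out to be the actual question of interest here.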
I don't know if it's a mixed ANOVA, but it sounds like something that could be expressed using a Likert scale within a cross-tabulation. I say that because you're comparing numerical variables, yeah? (As in how someone relates to something on a scale of one to five?)
Reply 2
Original post by beautifulbigmacs
I don't know if it's a mixed ANOVA, but it sounds like something that could be expressed using a Likert scale within a cross-tabulation. I say that because you're comparing numerical variables, yeah? (As in how someone relates to something on a scale of one to five?)


Thanks for your reply :smile:

Oh God I don't remember encountering cross-tabulation before!

Yeah, it involves a Likert scale. How about if you just identified attitude change, i.e. the difference in scale responses before and after the intervention? Then you'd just be comparing the mean attitude change between the groups?

I feel like that wouldn't work though.
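The change-score idea above boils down to a per-participant subtraction; a quick Python sketch with made-up numbers:

```python
# The "attitude change" idea: one change score per participant, then
# compare group means. All numbers are made up.
import numpy as np

pre_a  = np.array([2.8, 3.1, 3.0, 2.9])  # group A, before the intervention
post_a = np.array([4.0, 4.2, 3.9, 4.1])  # group A, after the intervention

change_a = post_a - pre_a   # within-subjects difference per participant
print(change_a.mean())      # mean attitude change for group A
# Repeat for groups B and control, then compare the three means.
```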
Between-participants ANOVA? (between groups)
Mixed is factors both within and between groups.
My stats is rusty.

BTW Laerd is probably as good as anything I have found in the past.

Edit: on second thoughts, it does look like mixed, as it's group scores x before/after scores. Not totally sure tho
(edited 8 years ago)
Reply 4
Original post by hellodave5
Between-participants ANOVA? (between groups)
Mixed is factors both within and between groups.
My stats is rusty.

BTW Laerd is probably as good as anything I have found in the past.

Edit: on second thoughts, it does look like mixed, as it's group scores x before/after scores. Not totally sure tho


Thanks for your reply!

I just had a look now and Laerd is pretty clear actually, seems like a good website. Thanks :smile:

I feel like it's a Mixed Anova too, but I'm just reading about it now, and the Laerd website says "the primary purpose of a mixed ANOVA is to understand if there is an interaction between the two factors on the dependent variable".

I'm not really interested in looking for an interaction, I just want to see if one intervention group leads to more attitude change than another.

God I hate stats...



If I just calculated attitude change scores from when participants completed the attitude scales the second time, would that simplify things a bit? Like it would then just be comparing one DV (attitude change) across the three intervention groups....
Original post by Twinpeaks
Thanks for your reply!

I just had a look now and Laerd is pretty clear actually, seems like a good website. Thanks :smile:

I feel like it's a Mixed Anova too, but I'm just reading about it now, and the Laerd website says "the primary purpose of a mixed ANOVA is to understand if there is an interaction between the two factors on the dependent variable".

I'm not really interested in looking for an interaction, I just want to see if one intervention group leads to more attitude change than another.

God I hate stats...



If I just calculated attitude change scores from when participants completed the attitude scales the second time, would that simplify things a bit? Like it would then just be comparing one DV (attitude change) across the three intervention groups....


Not sure, could be factorial?
http://www.ats.ucla.edu/stat/mult_pkg/whatstat/

I'm probably the worst person to ask, having narrowly avoided failing my advanced stats module ^^. Hope it helps though.
Original post by Twinpeaks


If I just calculated attitude change scores from when participants completed the attitude scales the second time, would that simplify things a bit? Like it would then just be comparing one DV (attitude change) across the three intervention groups....


I *think* this sounds about right. I would advise checking with your tutor to be certain though. Definitely sounds like you want to calculate the difference between the before and after numbers.
It is a between-participants ANOVA, not a mixed ANOVA.
Reply 8
Original post by JamesManc
It is a between-participants ANOVA, not a mixed ANOVA.


Are you sure? So do you think it's wise to obtain an "attitude change score" from the before and after scores, and then use that as the DV?
Reply 9
Original post by beautifulbigmacs
I *think* this sounds about right. I would advise checking with your tutor to be certain though. Definitely sounds like you want to calculate the difference between the before and after numbers.


Yeah, I will. Thanks for your advice :smile:
Original post by Twinpeaks
Are you sure? So do you think it's wise to obtain an "attitude change score" from the before and after scores, and then use that as the DV?


Yes, I would use that as the DV in a between-participants ANOVA comparing the three groups
Reply 11
Original post by hellodave5
Not sure, could be factorial?
http://www.ats.ucla.edu/stat/mult_pkg/whatstat/

I'm probably the worst person to ask, having narrowly avoided failing my advanced stats module ^^. Hope it helps though.


I'll have a look into it, thanks!

Honestly, I've just returned to uni after a year's work placement, and I've retained absolutely nothing stats-related. Bloody panic-inducing.
I've not done proper stats in a long time, but here is my thought...

Your change score WITHIN the group should be separate from the change score BETWEEN the groups.
Wouldn't a mixed ANOVA comparing all 6 data points confound the two things?

I'd say:
- Calculate your three change scores and run the ANOVA on those, then post-hoc t-tests to identify the location of any difference.
- Compare the pre and post scores for each of the 3 groups with 3 paired t-tests. This tells you if your change scores are significant in themselves.

Your change score in condition A (for example) may be significantly different from B and the control, but the actual pre/post difference may itself be non-significant.
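Both steps of this plan can be sketched with scipy on an invented dataset (every name and number below is illustrative, not from the actual study):

```python
# Rough sketch of both analyses suggested above, using scipy on
# invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = {g: rng.normal(3.0, 0.5, 20) for g in ("A", "B", "control")}
shift = {"A": 1.0, "B": 0.5, "control": 0.0}   # made-up intervention effects
post = {g: pre[g] + shift[g] + rng.normal(0, 0.3, 20) for g in pre}
change = {g: post[g] - pre[g] for g in pre}

# 1) One-way ANOVA on the change scores (the between-groups question).
print(stats.f_oneway(change["A"], change["B"], change["control"]))

# Post-hoc pairwise t-tests on the change scores to locate any difference
# (in practice you'd correct for multiple comparisons, e.g. Bonferroni).
for a, b in [("A", "B"), ("A", "control"), ("B", "control")]:
    print(a, "vs", b, stats.ttest_ind(change[a], change[b]))

# 2) Paired t-test per group: is each group's pre/post change significant
#    in itself?
for g in pre:
    print(g, stats.ttest_rel(post[g], pre[g]))
```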
Reply 13
Original post by _Sinnie_
I've not done proper stats in a long time, but here is my thought...

Your change score WITHIN the group should be separate from the change score BETWEEN the groups.
Wouldn't a mixed ANOVA comparing all 6 data points confound the two things?

I'd say:
- Calculate your three change scores and run the ANOVA on those, then post-hoc t-tests to identify the location of any difference.
- Compare the pre and post scores for each of the 3 groups with 3 paired t-tests. This tells you if your change scores are significant in themselves.

Your change score in condition A (for example) may be significantly different from B and the control, but the actual pre/post difference may itself be non-significant.


Thanks for the reply!

I think I see what you're saying. Please correct me if I'm wrong!
To establish whether there has been a change in attitudes within each group, I'd do a paired t-test for each group. So looking at the within-subjects difference (pre and post).

If all three groups show a significant change in attitudes, then I'd calculate the attitude change scores for each group, and then run a between-subjects ANOVA using the attitude change scores? Is that what you're saying?

Thanks again.
(edited 8 years ago)
Original post by Twinpeaks
Thanks for the reply!

I think I see what you're saying. Please correct me if I'm wrong!
To establish whether there has been a change in attitudes within each group, I'd do a paired t-test for each group. So looking at the within-subjects difference (pre and post).

If all three groups show a significant change in attitudes, then I'd calculate the attitude change scores for each group, and then run a between-subjects ANOVA using the attitude change scores? Is that what you're saying?

Thanks again.


You've got two separate questions to look at. The first question: is there a significant change in attitudes from before to after the intervention? This isn't very interesting, because you might expect a small change to be caused by small things such as boredom. However, the second question is: is the observed change in attitudes across the two measures the same or different across the three groups?

You're right that you can use a paired t-test to look at changes within each group... however, this is unlikely to be of any interest. I don't think you've stated what the main hypothesis is, but I'd guess that the hypothesis is that the change in attitudes is greater in interventions A & B vs. the control. What some people do (which is totally wrong) is run separate paired t-tests, say showing that you get a significant increase in the two intervention groups (p<.05) but not in the control group (e.g. p = .30). However, whilst this shows that the effect is significant in the experimental groups but not in the control group, it doesn't show that the effect is significantly stronger in the experimental groups versus the control.

I wouldn't get bogged down too much in the terminology for different ANOVA tests - lots of statistical tests have different names but are really the same test (correlations, t-tests, regressions and ANOVAs all belong to the same family of tests and are often interchangeable).

Original post by Twinpeaks

If all three groups show a significant change in attitudes. Then I'd calculate the attitude change scores for each group, and then run a between-subjects ANOVA using the attitude change scores? Is that what you're saying?



A mixed ANOVA does the same thing that you're proposing here: by calculating the change scores for each individual you're cancelling out the variance caused by people having different means to begin with, and by having the between-subjects effect you can see if the average change score is different across the groups.

I don't have time to check now - but what you're proposing here may get you exactly the same results that you'd get by doing a repeated measures ANOVA. I know for a fact that doing a repeated measures t-test is exactly the same as doing a single-group t-test on the change scores when your null hypothesis is that the average change score will be 0.
(edited 8 years ago)
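That last claim is easy to check numerically; a quick scipy demonstration on invented data:

```python
# Check: a paired (repeated-measures) t-test is identical to a one-sample
# t-test on the change scores against 0. Data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre = rng.normal(3.0, 0.5, 30)             # made-up pre-intervention scores
post = pre + 0.4 + rng.normal(0, 0.3, 30)  # made-up post-intervention scores

print(stats.ttest_rel(post, pre))          # paired t-test
print(stats.ttest_1samp(post - pre, 0.0))  # one-sample t-test on change scores
# Both lines print the same t statistic and p-value.
```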
Reply 15
Original post by iammichealjackson
You've got two separate questions to look at. The first question: is there a significant change in attitudes from before to after the intervention? This isn't very interesting, because you might expect a small change to be caused by small things such as boredom. However, the second question is: is the observed change in attitudes across the two measures the same or different across the three groups?

You're right that you can use a paired t-test to look at changes within each group... however, this is unlikely to be of any interest. I don't think you've stated what the main hypothesis is, but I'd guess that the hypothesis is that the change in attitudes is greater in interventions A & B vs. the control. What some people do (which is totally wrong) is run separate paired t-tests, say showing that you get a significant increase in the two intervention groups (p<.05) but not in the control group (e.g. p = .30). However, whilst this shows that the effect is significant in the experimental groups but not in the control group, it doesn't show that the effect is significantly stronger in the experimental groups versus the control.

I wouldn't get bogged down too much in the terminology for different ANOVA tests - lots of statistical tests have different names but are really the same test (correlations, t-tests, regressions and ANOVAs all belong to the same family of tests and are often interchangeable).



A mixed ANOVA does the same thing that you're proposing here: by calculating the change scores for each individual you're cancelling out the variance caused by people having different means to begin with, and by having the between-subjects effect you can see if the average change score is different across the groups.

I don't have time to check now - but what you're proposing here may get you exactly the same results that you'd get by doing a repeated measures ANOVA. I know for a fact that doing a repeated measures t-test is exactly the same as doing a single-group t-test on the change scores when your null hypothesis is that the average change score will be 0.


Thanks for the reply!

My hypothesis is that there will be a greater change in attitudes in group A compared to group B and the control group. I was just thinking that doing a paired t-test may be necessary because if my result for group B, for example, comes up as non-significant, and group A's attitude change score is significant, then that way I could accept the hypothesis that there will be a greater attitude change in Group A? But if both come up as significant, then I'd need to do the ANOVA? Would that be pointless, and would it be better just to stick with the ANOVA? It does seem like a lot of work!

Okay I'll have a look into Repeated Measures ANOVA, thanks :smile:

Edit: I've just re-read your post, and see that you think using a paired t-test in that way is wrong. So I won't use a t-test at all then. I'm still not entirely sure of the reasoning behind why a t-test used in this way is wrong though. If you don't mind, could you explain that a bit please? I think I get what you're saying though: even though the t-test for one group proved significant, and the t-test for the control was non-significant, the difference between that group and the control group may still not be significant?

Just to check: when I described the between-subjects ANOVA earlier, you don't seem to think that's wrong, just that a repeated measures ANOVA would be more efficient?
(edited 8 years ago)
Original post by Twinpeaks
Thanks for the reply!

My hypothesis is that there will be a greater change in attitudes in group A compared to group B and the control group. I was just thinking that doing a paired t-test may be necessary because if my result for group B, for example, comes up as non-significant, and group A's attitude change score is significant, then that way I could accept the hypothesis that there will be a greater attitude change in Group A? But if both come up as significant, then I'd need to do the ANOVA? Would that be pointless, and would it be better just to stick with the ANOVA? It does seem like a lot of work!

Okay I'll have a look into Repeated Measures ANOVA, thanks :smile:


Yes, it would be pointless to do that. Just because A is significant and B is not significant doesn't mean that the difference between the two groups is significant (see http://www.stat.columbia.edu/~gelman/research/published/signif4.pdf ). A could be significant (p=.03) and B could be just a tiny bit "non-significant" (p=.050001); that doesn't mean that there is any real difference between the groups.
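A small scipy illustration of the point, on invented change scores (the exact p-values depend on the random draw, but the logic is the same):

```python
# "The difference between significant and not significant is not itself
# significant": two groups with nearly identical true effects. Invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
change_a = rng.normal(0.30, 0.6, 20)  # made-up change scores, group A
change_b = rng.normal(0.25, 0.6, 20)  # made-up change scores, group B

print(stats.ttest_1samp(change_a, 0))       # A's change vs zero
print(stats.ttest_1samp(change_b, 0))       # B's change vs zero
print(stats.ttest_ind(change_a, change_b))  # A vs B compared directly
# One within-group test can land under p=.05 while the other doesn't, yet
# the direct A-vs-B comparison is the test that actually answers the question.
```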
Original post by Twinpeaks
Thanks for the reply!

My hypothesis is that there will be a greater change in attitudes in group A compared to group B and the control group. I was just thinking that doing a paired t-test may be necessary because if my result for group B, for example, comes up as non-significant, and group A's attitude change score is significant, then that way I could accept the hypothesis that there will be a greater attitude change in Group A? But if both come up as significant, then I'd need to do the ANOVA? Would that be pointless, and would it be better just to stick with the ANOVA? It does seem like a lot of work!

Okay I'll have a look into Repeated Measures ANOVA, thanks :smile:

Edit: I've just re-read your post, and see that you think using a paired t-test in that way is wrong. So I won't use a t-test at all then. I'm still not entirely sure of the reasoning behind why a t-test used in this way is wrong though. If you don't mind, could you explain that a bit please? I think I get what you're saying though: even though the t-test for one group proved significant, and the t-test for the control was non-significant, the difference between that group and the control group may still not be significant?

Just to check: when I described the between-subjects ANOVA earlier, you don't seem to think that's wrong, just that a repeated measures ANOVA would be more efficient?

[Attachment: ANOVA notes.png]
Neither method is more efficient; they get you exactly the same results (see pic).

I analysed a made-up dataset with three groups and each person doing two tests. The last column is the change score.

The top output is the main results table from a mixed-effects model. It shows that factor1 (which is the repeated measures main effect) gives a marginally significant difference (p=.051) between the first and second test. However, the interaction effect (factor1*group) is significant (F=28.7, p=.000), showing that the difference between the first and second test is not the same across the three groups (i.e. an interaction effect).

With the simpler model (second output below) I put the change score (4th column) of each individual into an ANOVA model with group as the between-subjects factor. This showed that the mean change score was different across the groups -- crucially, we get the same F value (F=28.7) and p value (p=.000) as for the "interaction" effect in the mixed-effects model.

I'd just calculate the change score and do a between-subjects ANOVA, as it's easier to interpret: you don't need to look up what an interaction effect is or worry about the rest of the mixed-model output (it's not any less statistically correct, as it's the same test).
(edited 8 years ago)
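For anyone wanting to reproduce that equivalence, here is a sketch assuming the Python pingouin library for the mixed ANOVA; the dataset is invented, so the numbers won't match the screenshot, but the interaction F and the change-score ANOVA F will match each other:

```python
# Reproduce the equivalence: the time x group interaction F from a mixed
# ANOVA equals the F from a one-way ANOVA on the change scores.
# Assumes pingouin is installed; the dataset is invented.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(3)
rows = []
for group, shift in [("A", 1.0), ("B", 0.5), ("control", 0.0)]:
    for i in range(20):
        pre = rng.normal(3.0, 0.5)
        post = pre + shift + rng.normal(0, 0.3)
        rows.append({"id": f"{group}{i}", "group": group,
                     "pre": pre, "post": post})
wide = pd.DataFrame(rows)
wide["change"] = wide["post"] - wide["pre"]

# Mixed ANOVA wants long format; the 'Interaction' row is time x group.
long = wide.melt(id_vars=["id", "group"], value_vars=["pre", "post"],
                 var_name="time", value_name="score")
print(pg.mixed_anova(data=long, dv="score", within="time",
                     subject="id", between="group"))

# One-way ANOVA on the change scores: same F as the interaction row above.
groups = [wide.loc[wide["group"] == g, "change"] for g in ("A", "B", "control")]
print(stats.f_oneway(*groups))
```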
Reply 18
Original post by iammichealjackson
[Attachment: ANOVA notes.png]
Neither method is more efficient; they get you exactly the same results (see pic).

I analysed a made-up dataset with three groups and each person doing two tests. The last column is the change score.

The top output is the main results table from a mixed-effects model. It shows that factor1 (which is the repeated measures main effect) gives a marginally significant difference (p=.051) between the first and second test. However, the interaction effect (factor1*group) is significant (F=28.7, p=.000), showing that the difference between the first and second test is not the same across the three groups (i.e. an interaction effect).

With the simpler model (second output below) I put the change score (4th column) of each individual into an ANOVA model with group as the between-subjects factor. This showed that the mean change score was different across the groups -- crucially, we get the same F value (F=28.7) and p value (p=.000) as for the "interaction" effect in the mixed-effects model.

I'd just calculate the change score and do a between-subjects ANOVA, as it's easier to interpret: you don't need to look up what an interaction effect is or worry about the rest of the mixed-model output (it's not any less statistically correct, as it's the same test).


Thank you so much. I can't begin to describe how much of a help that is, you've made it so much clearer!

I feel so relieved now :smile:
(edited 8 years ago)
