# Simple slopes analysis after finding an interaction


babygirl110

#1

Dear All,

Would anyone be able to advise me on how to go about doing a simple slopes analysis after finding a moderation effect between two continuous variables? That is, the relationship between two variables varies with the level of a third variable.

Many thanks

Oh. I'm using SPSS



Psych!

#2

Firstly, SPSS is a bit crap for this.

You effectively want to understand how X and Y relate when accounting for M (just pick one of the predictors to be M).

If M is continuous (which you say it is) then you need to center the variables (z-scores) and then regress Y on X at M = +1 and M = -1 (i.e., shift centered M by 1 - so +/-1 SD really; to probe the slope at high M you subtract 1, so that zero on the shifted variable sits at +1 SD, and vice versa for low M). The question you are asking is: does the regression weight for X in each regression significantly differ from 0? You can also use M = 0 (i.e., the unshifted centered variable) and compare three levels of the moderator (two can do the job, though).

In sum, you want to assess the conditional XY effects:

Y = b0 + b1X when M = -1

Y = b0 + b1X when M = 0

Y = b0 + b1X when M = +1
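To make the arithmetic behind those three conditionals concrete: in the full moderated model Y = b0 + b1*X + b2*M + b3*(X*M), the simple slope of X at a chosen M is b1 + b3*M. A minimal Python sketch (the coefficient values are made up for illustration):

```python
# Simple slope of X at a chosen value of the (centered) moderator M,
# given coefficients from the full moderated model
#   Y = b0 + b1*X + b2*M + b3*(X*M)
def simple_slope(b1, b3, m):
    """Slope of X on Y when the moderator equals m (in SD units)."""
    return b1 + b3 * m

# Made-up coefficients standing in for real regression output:
b1, b3 = 0.40, 0.25
for m in (-1, 0, 1):  # -1 SD, mean, +1 SD of the z-scored moderator
    print(f"M = {m:+d}: slope of X = {simple_slope(b1, b3, m):.2f}")
```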

It looks fairly simple, but requires some crunching. You need to center all variables (X, Y, and M), then shift M and calculate the cross-product X*M interaction term. Run the regression using X, M, and X*M as predictors (with Y as the dependent) for each of the conditional regressions. Specifically check the unstandardised (*not* standardised) Beta values for the X predictor (usual t-test of Beta).

Easy enough if you can use SPSS syntax:

For M = +1 with centered Z-values (note: subtract 1, so that zero on the shifted variable corresponds to high M):

COMPUTE ZMHigh = ZM - 1.

COMPUTE ZX.ZMHigh = ZX*ZMHigh.

EXECUTE.

REGRESSION

/DEPENDENT = Y

/METHOD = ENTER ZX ZMHigh

/METHOD = ENTER ZX.ZMHigh.

and do the same for M = -1 (change the first COMPUTE to ZMLow = ZM + 1 and relabel accordingly). You can then use something like Excel to plot the slopes.
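If it helps to see why shifting the moderator does the job, here is a rough Python check (synthetic data with made-up coefficients, not SPSS output): re-centering the z-scored moderator so the value of interest becomes zero makes the unstandardised X coefficient equal the simple slope at that value.

```python
import numpy as np

# Synthetic check of the re-centering trick: the X coefficient after
# shifting M equals the simple slope b1 + b3*m at the shifted value.
# All coefficients below are made up for illustration.
rng = np.random.default_rng(0)
n = 500
X = rng.standard_normal(n)
M = rng.standard_normal(n)  # already z-scored by construction
Y = 1.0 + 0.4 * X + 0.3 * M + 0.25 * X * M + 0.01 * rng.standard_normal(n)

def x_coef(shift):
    """Unstandardised X coefficient with the moderator re-centered.

    shift = +1 probes 'high' M: M - 1 puts zero at +1 SD.
    """
    Ms = M - shift
    D = np.column_stack([np.ones(n), X, Ms, X * Ms])
    return np.linalg.lstsq(D, Y, rcond=None)[0][1]

print(round(x_coef(+1.0), 2))  # slope at +1 SD: 0.4 + 0.25 = 0.65
print(round(x_coef(-1.0), 2))  # slope at -1 SD: 0.4 - 0.25 = 0.15
```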

Hope that helps. Might be worth hunting down something on the net (or I think Howell has an outline of this) - harder to explain in a forum post.



babygirl110

#3

(Original post by **Psych!**)

Okay, done the regression interaction bit. That was fairly simple; it's the simple slopes in Excel I just don't understand.

I've downloaded loads of info about it and my lecturer has sent me stuff, but it's all so overwhelming. Maybe I need to make more effort to try and understand it.


Psych!

#4

Just use the regression output (the Beta values) and find the Y values for +1, 0, -1 of X.

Then plot. You can simply do this by hand if you want (or use excel to calculate).



babygirl110

#5

(Original post by **Psych!**)

So what I understand from what you are saying is that I need to substitute those Beta values into the regression equation and try to work it out?

Okay, hope I get it by the end of today. Thanks!


Psych!

#6

You need to go beyond the original regression, though (not sure if you mean the conditional or not). The initial regression showing the X*M interaction just informs you of the moderator relationship. So you would have performed a regression with the untransformed data using X, M, and X*M on the dependent Y - where the X*M interaction was significant. The next step is to tease the moderator relationship apart.

If you follow what I outlined above, you will now have two conditional regressions which alter M (high or low). Use the regression equation for each of these to produce values of Y using X = -1, 0, +1. This will give you three values of Y for each of the regressions.

Then plot X vs. Y for each moderator conditional (giving two lines). This is just the visual aspect - the conditional regression output will be more important for determining whether high/low M significantly alters the relationship between X and Y.
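That plotting step can be sketched like so (the intercepts and unstandardised slopes below are made-up stand-ins for the values you would read off each conditional regression's output):

```python
# Turn each conditional regression equation Y = b0 + b1*X into three
# plottable points at X = -1, 0, +1. The (b0, b1) pairs are made-up
# stand-ins for the output of the two conditional regressions.
conditionals = {
    "high M (+1 SD)": (0.10, 0.65),
    "low M (-1 SD)":  (0.10, 0.15),
}

points = {}
for label, (b0, b1) in conditionals.items():
    points[label] = [(x, b0 + b1 * x) for x in (-1, 0, 1)]
    print(label, [(x, round(y, 2)) for x, y in points[label]])
```

Plotting the two lines in Excel (or anywhere) is then just three (X, Y) pairs per line.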



babygirl110
#7

(Original post by **Psych!**)

Okay, thanks, I'll try that.


Psych!
#8

lol, my intuition gland suggests I should perhaps explain a bit more.

When you did the original regression, it basically just says 'there is a significant interaction between X*M (or A*B)'. What you want to know is how variations in the moderator (M) alter the relationship between X and Y. Because M is continuous you need to vary between low, mid, and high (or just low and high) values of M.

The interaction is saying that the relationship between X and Y changes with the value of M. So by using the conditional regression approach you force the variation across M - so we'll take high and low M and see how the X vs. Y relationship changes. For example, we might find for high values of M the relationship is significant, but it is not significant at low values of M.

You need to center the variables to control for little pernickety things like multicollinearity. So transform all variables to Z-scores.

Then by performing two conditional regressions which use values of +1M and -1M you are examining the effect of varying M on the XY relationship across a decent range (+1 SD to -1 SD). This should tease apart the interaction, and you can assess the XY relationship from the unstandardised X beta-weights for each of the conditional regressions along with a visual of the slopes. This is simple slopes analysis (i.e., compare the XY slopes at values of M).

You can sort of see this method as a way of turning a continuous variable into a categorical one. You could also use a median split, but that method sucks for various reasons.
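The centering step itself is just z-scoring; a tiny Python illustration (the ages are invented):

```python
from statistics import mean, stdev

def z_scores(values):
    """Center to mean 0 and scale to SD 1 (sample SD, as SPSS uses)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

ages = [21, 25, 30, 34, 40]  # invented data
z = z_scores(ages)
print([round(v, 2) for v in z])
```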



babygirl110
#9

Okay great, I'll try to follow that. Hope I get it right with trial and error. Mentally I had this image of turning one variable into a categorical one, because I remember when I did an ANOVA it was fairly easy as one of the variables was already categorical. But this is probably the hardest analysis I have encountered, because SPSS can't do it for me.


babygirl110
#10

(Original post by **Psych!**)

I have attached it to make what I'm trying to say easier.


babygirl110
#11

Oh, I just read that it means my interaction isn't significant. What does the ANOVA tell me then?


Psych!
#12

(Original post by **babygirl110**)

For a regression the ANOVA tells you whether the regression model explains the variance better than the basic means model. It just says whether the model is good (does it explain the variance in Y well?).

As for the output etc.: I'm not too sure what you've done there.

1. Start off by performing the basic regression analysis with the data which includes the interaction term.

2. If you find the interaction is significant, then perform the simple slopes analysis on the centered data (z-scores) at high and low (+1 and -1) values of the moderator.
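Step 1 can be sketched end to end in Python (synthetic data with a built-in interaction; all the numbers are invented, and real work would use SPSS or a stats package):

```python
import numpy as np

# Step 1 sketch: fit Y = b0 + b1*X + b2*M + b3*(X*M) by least squares
# and t-test the interaction coefficient b3. Data are synthetic with a
# true interaction of 0.4 built in; all values are invented.
rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal(n)
M = rng.standard_normal(n)
Y = 0.5 * X + 0.3 * M + 0.4 * X * M + rng.standard_normal(n)

D = np.column_stack([np.ones(n), X, M, X * M])
b = np.linalg.lstsq(D, Y, rcond=None)[0]
resid = Y - D @ b
s2 = resid @ resid / (n - D.shape[1])            # residual variance
se = np.sqrt(np.diag(s2 * np.linalg.inv(D.T @ D)))
t_interaction = b[3] / se[3]                     # |t| > ~2 => significant
print(f"b3 = {b[3]:.2f}, t = {t_interaction:.1f}")
```

Only if that t-test on the interaction comes out significant is there anything for the simple slopes step to tease apart.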

However, looking at the output: are age *and* gender your predictors? Are *both* continuous?


babygirl110
#13

(Original post by **Psych!**)

I then entered my two IVs into the second model and they were both significant predictors. I finally entered the interaction term into the third model, and the ANOVA is significant but not the R-squared change.

So do you advise entering the interaction straight after age and gender, without entering the two IVs first?


Psych!
#14

Okie doke. But you need to account for them as well in the simple slopes analysis.

Transform age (which I assume is continuous) to a z-score and effect code gender (e.g., male = +1, female = -1 if the groups are *equal* size). This will enter them as covariates in the conditional regressions.

Then just perform the two conditional regressions as shown earlier.
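Effect coding is just a +1/-1 recode; a tiny Python sketch (the labels are hypothetical):

```python
# Effect code a two-level covariate (e.g., gender) as +1/-1 so it can
# sit alongside z-scored predictors in the conditional regressions.
def effect_code(values, plus="male", minus="female"):
    """Map a two-level variable onto +1 (plus) and -1 (minus)."""
    codes = {plus: 1, minus: -1}
    return [codes[v] for v in values]

print(effect_code(["male", "female", "female", "male"]))  # [1, -1, -1, 1]
```

With unequal group sizes, weighted effect codes are the usual alternative.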


babygirl110
#15

(Original post by **Psych!**)


aw305

#16

(Original post by **Psych!**)

Hi,

I am also interested in plotting simple slopes using this method.

I have transformed my moderator (M+1; M-1) and created the relevant interaction terms (X*M+1; X*M-1), and run these conditional regressions in SPSS. However, how can I use the SPSS output to plot the simple slopes in Excel? It may be worth pointing out that my moderator is continuous (level of automaticity) and my independent variable X is categorical (condition assignment). Is it just the Beta value for X that I should be concerned with? What exactly does this changing Beta value (and its significance) across levels of the moderator tell me?

Many thanks for any help


RenlotSoph

#17

Start off with your regression analysis; if it's significant, follow up with a simple slopes analysis.


popi_ts

#18

Hello,

I need some help understanding the principle behind the regressions at +/-1 SD of the moderator following a significant interaction. I understand that I am testing the effect of the independent variable X on the dependent Y when the moderator has a specific value; I am having some difficulty with how this is calculated.

By making a new variable at +1 SD of the moderator, I am actually adding a constant (1 SD) to every participant's score and entering this new variable as a predictor in my model instead of the original moderating variable.

I just had a statistics consultation and had some trouble explaining why this makes sense. According to my colleague, when you want to test the relation between X and Y at values of M, you need to hold M constant to see what happens to the X coefficient, not produce a new variable (M + 1 SD) for each participant.

I would really appreciate it if someone could help me explain this (and, most importantly, understand the difference).

Popi



ScaredofThesis

#19

(Original post by **Psych!**)
