# Linear Regression with Interaction in R

```r
# create a data frame from the built-in iris data set
dat <- data.frame(X = iris$Petal.Length,
                  Y = iris$Sepal.Length,
                  Z = iris$Petal.Width)

# linear regression model with an interaction between X and Z
summary(lm(Y ~ X + Z + X:Z, data = dat))
```

Output:

```
Call:
lm(formula = Y ~ X + Z + X:Z, data = dat)

Residuals:
     Min       1Q   Median       3Q      Max
-1.00058 -0.25209  0.00766  0.21640  0.89542

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  4.57717    0.11195  40.885  < 2e-16 ***
X            0.44168    0.06551   6.742 3.38e-10 ***
Z           -1.23932    0.21937  -5.649 8.16e-08 ***
X:Z          0.18859    0.03357   5.617 9.50e-08 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3667 on 146 degrees of freedom
Multiple R-squared:  0.8078,	Adjusted R-squared:  0.8039
F-statistic: 204.5 on 3 and 146 DF,  p-value: < 2.2e-16
```

So, the linear regression equation is:

`Y = 4.58 + 0.44 X - 1.24 Z + 0.19 X×Z`
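These rounded coefficients can be pulled directly from the fitted model with `coef()` (a quick check; `dat` is rebuilt so the snippet runs on its own):

```r
# rebuild the data frame and refit the model
dat <- data.frame(X = iris$Petal.Length,
                  Y = iris$Sepal.Length,
                  Z = iris$Petal.Width)
fit <- lm(Y ~ X + Z + X:Z, data = dat)

# coefficients, rounded to 2 decimals
round(coef(fit), 2)
# (Intercept)           X           Z         X:Z
#        4.58        0.44       -1.24        0.19
```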

## How to interpret this model?

The intercept, 4.58, is the Y value when X = 0 and Z = 0.

The coefficient of X, 0.44, is the change in Y associated with a 1 unit increase in X when Z = 0. If Z = 0 is implausible, the effect of X on Y can be read as follows: a 1 unit increase in X changes Y by 0.44 + 0.19 Z. We can plug in different values of Z to get the effect of X on Y.

The coefficient of Z, -1.24, is the change in Y associated with a 1 unit increase in Z when X = 0. If X = 0 is implausible, the effect of Z on Y can be read as follows: a 1 unit increase in Z changes Y by -1.24 + 0.19 X. We can plug in different values of X to get the effect of Z on Y.
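For example, the effect of a 1 unit increase in X can be tabulated at several values of Z directly from the rounded estimates (the Z values below are arbitrary illustrations):

```r
# effect of a 1 unit increase in X on Y, at a few illustrative values of Z
z_values <- c(0, 0.5, 1, 1.5, 2)
effect_of_x <- 0.44 + 0.19 * z_values
data.frame(Z = z_values, effect_of_X_on_Y = effect_of_x)
```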

The interaction coefficient, 0.19, is the increase in the effect of X on Y for a 1 unit increase in Z. Equivalently, it is the increase in the effect of Z on Y for a 1 unit increase in X.

(For more information, I wrote a separate article on how to interpret interaction terms in linear regression)

## How to decide if the model with interaction is better than the model without interaction?

### 1. Look at the p-value associated with the coefficient of the interaction term:

In our case, the coefficient of the interaction term is statistically significant. This means that there is strong evidence for an interaction between X and Z.
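That p-value can also be extracted programmatically from the coefficient table returned by `summary()` (a sketch; `dat` is rebuilt so the snippet runs on its own):

```r
dat <- data.frame(X = iris$Petal.Length,
                  Y = iris$Sepal.Length,
                  Z = iris$Petal.Width)
fit <- lm(Y ~ X + Z + X:Z, data = dat)

# p-value of the interaction coefficient
summary(fit)$coefficients["X:Z", "Pr(>|t|)"]
# approximately 9.5e-08, well below 0.05
```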

### 2. Compare the R-squared of the model without interaction to that of the model with interaction:

```r
summary(lm(Y ~ X + Z, data = dat))$r.squared
# outputs: 0.7662613

summary(lm(Y ~ X + Z + X:Z, data = dat))$r.squared
# outputs: 0.807802
```

In this case, the model without interaction explains 76.6% of the variance in Y, while the model with interaction explains 80.8%.

This means that the interaction X×Z explains an additional 4.2% of the variance in Y, which is a substantial effect!
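A more formal way to compare the two nested models is a partial F-test with `anova()` (a sketch; since only one term is added, its conclusion matches the interaction coefficient's p-value):

```r
dat <- data.frame(X = iris$Petal.Length,
                  Y = iris$Sepal.Length,
                  Z = iris$Petal.Width)

fit_main <- lm(Y ~ X + Z, data = dat)        # without interaction
fit_int  <- lm(Y ~ X + Z + X:Z, data = dat)  # with interaction

# partial F-test: does adding X:Z significantly improve the fit?
anova(fit_main, fit_int)
```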

(For more information, see: why and when to include interactions in regression)

⚠ Note:

When you include an interaction between 2 independent variables X and Z, DO NOT remove the main effects of X and Z from the model, even if their p-values are larger than 0.05 (i.e. even if their effects are not statistically significant).
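In R, the formula shorthand `X*Z` enforces this convention automatically: it expands to `X + Z + X:Z`, so the main effects are always kept alongside the interaction:

```r
dat <- data.frame(X = iris$Petal.Length,
                  Y = iris$Sepal.Length,
                  Z = iris$Petal.Width)

# Y ~ X*Z is equivalent to Y ~ X + Z + X:Z
fit <- lm(Y ~ X * Z, data = dat)
names(coef(fit))
# outputs: "(Intercept)" "X" "Z" "X:Z"
```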

## References

• James G, Witten D, Hastie T, Tibshirani R. An Introduction to Statistical Learning: With Applications in R. 2nd ed. Springer; 2021.
• Gelman A, Hill J, Vehtari A. Regression and Other Stories. Cambridge University Press; 2020.