Assess Variable Importance in Linear and Logistic Regression

In this article, we will be concerned with the following question:

Given a regression model, which of the predictors X1, X2, X3, etc. has the most influence on the outcome Y?

In general, assessing the relative importance of predictors by directly comparing their (unstandardized) regression coefficients is not a good idea because:

  • For numerical predictors: The regression coefficients will depend on the units of measure of each predictor. For instance, it does not make sense to compare the effect of years of age to that of centimeters in height, nor the effect of 1 mg/dl of blood glucose to that of 1 mmHg of blood pressure.
  • For categorical predictors: The regression coefficients will depend on how the categories were defined. For instance, the coefficient of the variable smoking will depend on how many categories you chose to create for this variable, and how you decided to handle ex-smokers.

Instead, the relative importance of each predictor in the model can be evaluated by:

  1. Comparing standardized regression coefficients
  2. Comparing each predictor’s influence on the model’s accuracy
  3. Comparing the change in a predictor necessary to replicate the effect of another one on the outcome Y
  4. Comparing the change in each predictor required to change the outcome Y by a certain fixed amount
  5. Comparing the change in the outcome Y associated with an arbitrarily fixed change in each predictor

Below we will discuss each of these methods: how they work, their advantages and limitations.

1. Comparing standardized regression coefficients

How it works:

Standardized regression coefficients are obtained by replacing the variables in the model with their standardized versions.

A standardized variable is a variable rescaled to have a mean of 0 and a standard deviation of 1. This is done by subtracting the mean and dividing by the standard deviation for each value of the variable.

By standardizing the predictors in a regression model, the unit of measure of each becomes its own standard deviation. The assumption is that once all variables in the model are measured in the same unit, their coefficients become comparable.
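As a quick sketch with made-up numbers (my own illustration, not code from the article): standardizing a variable, and converting an unstandardized linear-regression slope into a standardized one via b_std = b × SD(X) / SD(Y), looks like this in Python:

```python
import statistics as st

# Hypothetical sample of a predictor, e.g. systolic blood pressure in mmHg
x = [118, 125, 131, 142, 150, 160, 137, 122]

mean_x, sd_x = st.mean(x), st.stdev(x)
z = [(v - mean_x) / sd_x for v in x]   # standardized: mean 0, SD 1

# For linear regression, the standardized coefficient can also be derived
# from the unstandardized one without refitting: b_std = b * SD(X) / SD(Y)
b, sd_y = 0.8, 12.0                    # hypothetical slope and SD of the outcome
b_std = b * sd_x / sd_y
```

Refitting the model on the standardized variables gives the same standardized coefficients as this rescaling of the original ones.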

Advantages of using standardized coefficients:

1. Easy to apply and interpret: the variable with the largest standardized coefficient (in absolute value) is considered the most important one in the model, and so on.

2. Provides an objective measure of importance, unlike other methods (such as some of those below) that rely on domain knowledge to construct an arbitrary common unit for judging the importance of the predictors.

Limitations of standardized coefficients:

Since the standard deviation of each variable is estimated from the study sample, it will depend on:

  1. the sample distribution of this variable
  2. the sample size (for small sample sizes the standard deviation will be highly unstable)
  3. the population being studied
  4. the study design

A small change in any of these will affect the estimated standard deviation. An unstable estimate makes standardized coefficients unreliable: a variable with a higher standard deviation will have a bigger standardized coefficient and will therefore appear more important in the model, even when its per-unit effect on Y is the same.

R simulation:
I ran a simulation in R to show that, between 2 variables X1 and X2 that have the same effect on Y (i.e. the same importance), the one with the higher standard deviation will have the bigger standardized coefficient.
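Since the R code is not reproduced here, here is a minimal Python sketch of the same idea (my own illustration): X1 and X2 get the same true coefficient, but X2 has three times the standard deviation of X1.

```python
import random
import statistics as st

def cov(x, y):
    mx, my = st.mean(x), st.mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

random.seed(42)
n = 10_000
x1 = [random.gauss(0, 1) for _ in range(n)]   # SD = 1
x2 = [random.gauss(0, 3) for _ in range(n)]   # SD = 3, same true effect on Y
y = [2*a + 2*b + random.gauss(0, 1) for a, b in zip(x1, x2)]

sd_y = st.stdev(y)
std_coef = {}
for name, x in [("X1", x1), ("X2", x2)]:
    # x1 and x2 are generated independently, so each multiple-regression
    # slope is well approximated by the simple OLS slope cov(x, y) / var(x)
    b = cov(x, y) / st.variance(x)
    std_coef[name] = b * st.stdev(x) / sd_y
    print(f"{name}: unstandardized b = {b:.2f}, standardized b = {std_coef[name]:.2f}")
```

Both unstandardized slopes come out near 2, yet the standardized coefficient of X2 is roughly three times that of X1, purely because of its larger standard deviation.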

One way to deal with this limitation is to obtain a more stable estimate of the population standard deviation from another study that has the same design as yours and targets the same population, but has a larger sample size. The other option is to use another method from this list to assess the importance of predictors.

2. Comparing each predictor’s influence on the model’s accuracy

How it works:

For linear regression, you can compare the increase in the model’s R2 that results from adding each predictor, or equivalently compare the drop in R2 for each predictor removed from the model.

For logistic regression, you can compare the drop in deviance that results from adding each predictor to the model.
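As a self-contained illustration for the linear case (simulated data, my own sketch rather than code from the article), one can compare the drop in R2 when each predictor is removed from a two-predictor model:

```python
import random
import statistics as st

def cov(x, y):
    mx, my = st.mean(x), st.mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

def r_squared(y, y_hat):
    my = st.mean(y)
    sse = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    sst = sum((a - my) ** 2 for a in y)
    return 1 - sse / sst

def r2_single(x, y):
    """R2 of the simple regression of y on x alone."""
    b = cov(x, y) / st.variance(x)
    a = st.mean(y) - b * st.mean(x)
    return r_squared(y, [a + b * v for v in x])

random.seed(7)
n = 5_000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [3*a + 1*b + random.gauss(0, 2) for a, b in zip(x1, x2)]

# x1 and x2 are generated independently (no collinearity), so each slope
# of the full model is well approximated by cov(xj, y) / var(xj)
b1 = cov(x1, y) / st.variance(x1)
b2 = cov(x2, y) / st.variance(x2)
b0 = st.mean(y) - b1 * st.mean(x1) - b2 * st.mean(x2)
r2_full = r_squared(y, [b0 + b1*a + b2*b for a, b in zip(x1, x2)])

drop = {"X1": r2_full - r2_single(x2, y),   # remove X1, refit with X2 alone
        "X2": r2_full - r2_single(x1, y)}   # remove X2, refit with X1 alone
print(drop)  # X1 (true coefficient 3) costs far more R2 than X2
```

With a real dataset you would refit the full and reduced models with your regression library of choice; the comparison of the R2 drops is the same.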

Advantages of using the model’s accuracy to assess variable importance:

1. R2 and the deviance are independent of the units of measure of each variable.

2. This method provides an objective measure of importance and does not require domain knowledge to apply.

Limitations of using the model’s accuracy to assess variable importance:

1. The increase in R2 (or the drop in deviance) depends heavily on the correlation between predictors (i.e. collinearity). The stronger the correlation between 2 predictors, the smaller the apparent contribution to the model’s accuracy of whichever one is added last. So for this method to work, we have to assume an absence of collinearity.

2. The model’s accuracy metrics should not be used to compare variable importance across studies, as Greenland et al. showed that the change in R2 caused by adding a given predictor in the model will differ across studies.

3. Comparing the change in a predictor necessary to replicate the effect of another one on the outcome Y

How it works:

The key idea here is that we are comparing the effect of all predictors in terms of the effect of a single predictor that we chose to consider as reference.

For example, Sharrett et al. compared the contribution of different risk factors to atherosclerosis stages relative to that of LDL cholesterol. So every risk factor was quantified by its LDL equivalent, i.e. the LDL level necessary to produce the same effect on atherosclerosis.
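With coefficients in hand, the computation is a simple ratio: the change in the reference predictor that replicates a one-unit change in predictor j is beta_j / beta_ref. A sketch with made-up coefficients (illustrative values only, not those of Sharrett et al.):

```python
# Hypothetical linear-model coefficients for an atherosclerosis outcome
# (made-up values for illustration, not from Sharrett et al.)
coefs = {
    "ldl_mg_dl": 0.010,           # effect per 1 mg/dl of LDL cholesterol
    "sbp_mmhg": 0.020,            # effect per 1 mmHg of systolic pressure
    "smoking_pack_years": 0.030,  # effect per 1 pack-year of smoking
}

b_ref = coefs["ldl_mg_dl"]
# LDL equivalent: the mg/dl of LDL producing the same effect on the
# outcome as a one-unit change in each predictor
ldl_equivalent = {name: b / b_ref for name, b in coefs.items()}
print(ldl_equivalent)
```

Under these made-up numbers, 1 mmHg of systolic pressure is worth 2 mg/dl of LDL, and 1 pack-year is worth 3 mg/dl.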

Advantages and limitations of comparing predictors in terms of other predictors:

This method is best used when one predictor can be considered a natural reference. For instance, we can compare the effects of different chemicals on lung cancer relative to smoking (whose effect can serve as a reference for all lung carcinogens). Otherwise, use another method to assess variable importance.

4. Comparing the change in each predictor required to change the outcome Y by a certain fixed amount

How it works:

This method consists of choosing a fixed value of the outcome Y (or a fixed change in Y), and then comparing the change in each predictor necessary to produce that fixed outcome.

For example, when it comes to the 10-year risk of death from all causes for a middle-aged man, becoming a smoker is equivalent to aging 10 years [Source: Woloshin et al.].
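In terms of a fitted linear model, this amounts to solving delta_x_j = delta_Y / b_j for each predictor. A sketch with made-up coefficients (illustrative values only, not those of Woloshin et al.):

```python
# Hypothetical linear-model coefficients for the 10-year risk of death,
# in percentage points (made-up values, not from Woloshin et al.)
coefs = {
    "age_years": 0.5,   # +0.5 points of risk per year of age
    "smoker": 5.0,      # +5 points of risk for being a smoker (0/1)
}

delta_y = 5.0           # fixed change in the outcome: +5 points of risk
required_change = {name: delta_y / b for name, b in coefs.items()}
print(required_change)  # 10 extra years of age ~ becoming a smoker
```

Under these numbers, a +5-point increase in risk requires either 10 extra years of age or a change of smoking status, which is exactly the kind of equivalence the method produces.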

Advantages and limitations of comparing the change in each predictor necessary to produce a certain fixed effect:

This method is best used when the units of measure of the predictors can be compared, either because they are measured in the same units or because they can be intuitively compared. In our example above, it is intuitive to quantify smoking in terms of years of age lost. Otherwise, you should assess variable importance using another method.

5. Comparing the change in the outcome Y associated with an arbitrarily fixed change in each predictor

How it works:

For each variable you want to compare:

  1. Choose a baseline value: in general, this should represent a normal status (for instance, for systolic blood pressure it can be 120 mmHg, the upper limit of normal blood pressure)
  2. Choose 1 or more index value(s): this should represent a value of interest (for instance, for systolic blood pressure we can choose 140 mmHg and 160 mmHg, which represent stages 1 and 2 of hypertension)
  3. Calculate the change in the outcome Y that corresponds to the change of the predictor from the baseline value to the index value

Finally, compare these changes in Y across predictors (or across studies).
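For a logistic model, for instance, the change in Y from baseline to index can be reported as an odds ratio, exp(b × (index − baseline)). A minimal sketch with a hypothetical coefficient (my own illustrative value):

```python
import math

# Hypothetical logistic-regression coefficient for systolic blood pressure
b_sbp = 0.02            # log-odds per 1 mmHg (made-up value)

baseline = 120          # upper limit of normal blood pressure
odds_ratios = {}
for index in (140, 160):            # stage 1 and stage 2 hypertension
    odds_ratios[index] = math.exp(b_sbp * (index - baseline))
    print(f"{baseline} -> {index} mmHg: OR = {odds_ratios[index]:.2f}")
```

For a linear model, the analogue is simply b × (index − baseline), the change in Y itself.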

Advantages and limitations of comparing the change in Y associated with a fixed change in each X:

Certainly there is some arbitrariness in selecting the baseline and index values, but at least your choice is grounded in domain knowledge, unlike standardized coefficients, whose implicit unit (the sample standard deviation) is subject to uncontrolled arbitrariness.

Final notes

Before comparing the effect of different predictors X1, X2, X3, etc. on the outcome Y remember that:

  • The mechanism of action of each predictor may be different: Predictors in the model may affect the outcome in various ways which makes it inappropriate to compare them.
  • The resistance to change of each predictor should be taken into account: For instance, reducing one’s weight may be theoretically more important for controlling blood pressure than reducing salt intake, but the latter is practically more important because it is easier to change.
  • The interaction between predictors complicates the comparison between them: This is because in case of interaction, a single regression coefficient will not be enough to represent the effect of the predictor on the outcome.

References

  • Szklo M, Nieto FJ. Epidemiology: Beyond the Basics. 4th edition. Jones & Bartlett Learning; 2018.
  • Gelman A. Regression and Other Stories. 1st edition. Cambridge University Press; 2020.
