In an experimental study, random assignment is the process by which participants are allocated, with equal probability, to either a treatment or a control group. The goal is to ensure an unbiased assignment of participants to treatment options.
Random assignment is considered the gold standard for achieving comparability across study groups, and therefore is the best method for inferring a causal relationship between a treatment (or intervention or risk factor) and an outcome.
Random assignment of participants produces groups that are comparable in terms of the participants' initial characteristics, so that any difference detected in the end between the treatment and the control group can be attributed to the effect of the treatment alone.
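The procedure itself is simple to sketch. Here is a minimal illustration in Python (the helper name `randomly_assign` and the even 50/50 split are assumptions for this example, not part of any standard protocol):

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them into treatment and control groups.

    Because the order is random, each participant has the same chance of
    ending up in either group, and neither the researcher nor the
    participant can influence the allocation.
    """
    rng = random.Random(seed)
    shuffled = participants[:]   # copy so the input list is left untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

participants = [f"P{i}" for i in range(1, 21)]
treatment, control = randomly_assign(participants, seed=42)
print(len(treatment), len(control))  # 10 10
```

Seeding the generator only makes the example reproducible; in a real trial the allocation sequence would be concealed from the investigators.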
How does random assignment produce comparable groups?
1. Random assignment prevents selection bias
Randomization works by removing both the researcher's and the participant's influence on the treatment allocation. The allocation can no longer be biased, since it is done at random, i.e. in a non-predictable way.
This is in contrast with the real world, where for example, the sickest people are more likely to receive the treatment.
2. Random assignment prevents confounding
A confounding variable is one that is associated with both the intervention and the outcome, and thus can affect the outcome in 2 ways: either directly, or indirectly through the treatment.
This indirect relationship between the confounding variable and the outcome can make the treatment appear to influence the outcome when in reality the treatment is just a mediator of that effect (since it happens to lie on the causal pathway between the confounder and the outcome).
Random assignment eliminates the influence of confounding variables on the treatment by distributing them at random between the study groups, thereby ruling out this alternative path, or explanation, of the outcome.
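As a rough illustration, the following simulation (assuming a single hypothetical confounder, age, drawn from a normal distribution; the numbers are arbitrary) shows that random allocation balances the confounder between groups:

```python
import random
import statistics

# Simulate 10,000 participants whose ages could confound a treatment effect.
rng = random.Random(0)
ages = [rng.gauss(50, 10) for _ in range(10_000)]

# Randomly allocate: shuffle, then split in half.
rng.shuffle(ages)
treatment_ages = ages[:5_000]
control_ages = ages[5_000:]

# After randomization the confounder's distribution is nearly identical in
# both groups, so age is no longer associated with receiving the treatment.
print(round(statistics.mean(treatment_ages), 1),
      round(statistics.mean(control_ages), 1))
```

The two group means come out almost equal, which is exactly what "distributing the confounder at random" buys us.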
3. Random assignment also eliminates other threats to internal validity
By distributing all threats (known and unknown) at random between study groups, participants in both the treatment and the control group become equally subject to the effect of any threat to validity. Therefore, comparing the outcome between the 2 groups will bypass the effect of these threats and will only reflect the effect of the treatment on the outcome.
These threats include:
- History: This is any event that co-occurs with the treatment and can affect the outcome.
- Maturation: This is the effect of time on the study participants (e.g. participants becoming wiser, hungrier, or more stressed with time) which might influence the outcome.
- Regression to the mean: This happens when the participants' scores are exceptional (unusually high or low) on a pre-treatment measurement, so the post-treatment scores will naturally drift back toward the mean; in simple terms, regression happens because an exceptional performance is hard to maintain. This effect can bias the study since it represents an alternative explanation of the outcome.
Note that randomization does not prevent these effects from happening; it just allows us to control for them by reducing the risk that they become associated with the treatment.
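Regression to the mean in particular is easy to demonstrate with a quick simulation (the score scale of 100, the noise level, and the selection cutoff of 120 are arbitrary assumptions for this sketch):

```python
import random
import statistics

rng = random.Random(0)

# Each participant has a stable "true" score; each measurement adds
# independent random noise on top of it.
true_scores = [rng.gauss(100, 10) for _ in range(10_000)]
pre = [t + rng.gauss(0, 10) for t in true_scores]
post = [t + rng.gauss(0, 10) for t in true_scores]

# Select the participants whose PRE-treatment scores were exceptionally high.
top = [i for i, p in enumerate(pre) if p > 120]

pre_mean = statistics.mean(pre[i] for i in top)
post_mean = statistics.mean(post[i] for i in top)

# With no treatment at all, the selected group's second measurement drifts
# back toward the population mean of 100: part of their exceptional pre-score
# was just favorable noise that does not repeat.
print(round(pre_mean, 1), round(post_mean, 1))
```

The post-treatment mean of the selected group lands well below their pre-treatment mean, even though nothing was done to them.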
What if random assignment produced unequal groups?
Question: What should you do if, after randomly assigning participants, the 2 groups still differ in participants' characteristics? More precisely, what if randomization accidentally failed to balance, between the 2 groups, risk factors that could serve as alternative explanations? (For example, one group might include more male, sicker, or older participants than the other.)
Short answer: This is perfectly normal, since randomization only ensures an unbiased assignment of participants to groups, i.e. it produces comparable groups, but it does not guarantee that these groups are equal.
A more complete answer: Randomization will not and cannot create 2 groups that are equal on each and every characteristic, because randomization always involves an element of chance. If you want 2 perfectly equal groups, you would have to match them manually, as is done in a matched pairs design (for more information see my article on matched pairs design).
This is similar to throwing a die: if you throw it 10 times, the observed proportion of a specific outcome will generally not be exactly 1/6. But it will approach 1/6 if you repeat the experiment a very large number of times and calculate the proportion of times that specific outcome turned up.
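The die analogy can be checked with a short simulation (the sample sizes chosen here are arbitrary):

```python
import random

rng = random.Random(0)

def proportion_of_sixes(n_rolls):
    """Fraction of throws that come up six in n_rolls throws of a fair die."""
    return sum(rng.randint(1, 6) == 6 for _ in range(n_rolls)) / n_rolls

# With few throws the observed proportion can be far from 1/6; with many
# throws it settles close to the theoretical value 1/6 ≈ 0.167.
for n in (10, 100, 100_000):
    print(n, round(proportion_of_sixes(n), 3))
```

The same logic applies to group balance: a small trial can easily end up lopsided on some characteristic, while imbalances wash out as sample sizes (or the number of aggregated trials) grow.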
So randomization will not produce perfectly equal groups for each specific study, especially if the study has a small sample size. But do not forget that scientific evidence is a long and continuous process, and the groups will tend to be equal in the long run when a meta-analysis aggregates the results of a large number of randomized studies.
So for each individual study, some differences between the treatment and control group will exist and will influence the study results. This means that the results of a randomized trial will sometimes be wrong, and this is absolutely okay.
Although the results of a particular randomized study are unbiased, they are still subject to sampling error due to chance. The real benefit of random assignment shows when data are aggregated in a meta-analysis.
Limitations of random assignment
Randomized designs can suffer from:
1. Ethical issues:
Randomization is ethical only if the researcher has no evidence that one treatment is superior to the other.
Also, it would be unethical to randomly assign participants to harmful exposures such as smoking or dangerous chemicals.
2. Low external validity:
With random assignment, external validity (i.e. the generalizability of the study results) is compromised because the results of a study that uses random assignment represent what would happen under “ideal” experimental conditions, which is in general very different from what happens at the population level.
In the real world, people who take the treatment might be very different from those who don't, so the assignment of participants is not a random event, but rather under the influence of all sorts of external factors.
External validity can also be jeopardized in cases where not all participants are eligible for, or willing to accept, the terms of the study.
3. Higher cost of implementation:
An experimental design with random assignment is typically more expensive than an observational study, where the investigator's role is just to observe events without intervening.
Experimental designs also typically take a lot of time to implement, and therefore are less practical when a quick answer is needed.
4. Impracticality when answering non-causal questions:
A randomized trial is our best bet when the question is to find the causal effect of a treatment or a risk factor.
Sometimes, however, the researcher is only interested in predicting the probability of an event or a disease given some risk factors. In this case, the causal relationship between these variables is not important, which makes observational designs more suitable for such problems.
5. Impracticality when studying the effect of variables that cannot be manipulated:
The usual objective of studying the effects of risk factors is to propose recommendations that involve changing the level of exposure to these factors.
However, some risk factors cannot be manipulated, so it does not make sense to study them in a randomized trial. For example, it would be impossible to randomly assign participants to age categories, gender, or genetic factors.
6. Difficulty controlling participants:
These difficulties include:
- Participants refusing to receive the assigned treatment.
- Participants not adhering to recommendations.
- Differential loss to follow-up between those who receive the treatment and those who don’t.
All of these issues might occur in a randomized trial, but might not affect an observational study.