Tuesday, March 18, 2014
Doing resistance exercise to failure is normally recommended for those who want to maximize strength and muscle mass gains from the exercise. Yet going to failure tends to significantly increase the chances of injury, after which the ability to do resistance exercise is impaired – which in turn impairs gains in the long term.
From an evolutionary perspective, getting injured is clearly maladaptive. Prey animals that show signs of injury, for example, tend to be targeted by predators. There is also functional loss, which would be reflected in impaired hunting and gathering ability.
So, assuming that going to failure is at least somewhat unnatural, because of a higher likelihood of subsequent injuries, how can it be advisable in the context of resistance exercise?
The graph below is from a study by Izquierdo and colleagues (). They randomly assigned several athletes to two exercise conditions, namely resistance training to failure (RF) and not to failure (NRF). A control group of athletes did not do any resistance exercise. The athletes were tested at four points in time: before the initiation of training (T0), after 6 wk of training (T1), after 11 wk of training (T2), and after 16 wk of training (T3).
The graph above shows the gains in terms of weight lifted in two exercises, the bench press and squat. It is similar to other graphs from the study in that it clearly shows: (a) improvements in the amount of weight lifted over time for both the RF and NRF groups, which reflect gains in strength; and (b) no significant differences in the improvements for the RF and NRF groups.
When you look at the results of the study as a whole, it seems that RF and NRF are associated with slightly greater or lesser gains, depending on the type of exercise and the measure of gains employed. The differences are small, and one can reasonably conclude that no significant difference in overall gains exists between RF and NRF.
It is clear that going to failure leads to increased metabolic stress, and that increased metabolic stress is associated with greater secretion of anabolic hormones (). How can this be reconciled with the lack of a significant difference in gains in the RF and NRF groups?
The graph below provides a hint as to the answer to this question. It shows resting serum cortisol concentrations in the participants. As you can see, after 16 wk of training (T3) cortisol levels are higher in the RF group, which is particularly interesting because the NRF group had higher cortisol levels at baseline (T0). Cortisol is a catabolic hormone, which may in this case counter the effects of the anabolic hormones, even though going to failure is expected to lead to greater anabolic hormone secretion.
It seems that cortisol levels tend to go up over time for those who go to failure, and go down for those who do not. I am not sure if this is a strictly metabolic effect. There may be a psychological component to it, as strength and power gains tend to become increasingly difficult to achieve over time (see schematic graph below); this perhaps adds some mental stress as one tries to continue increasing resistance (or weight) while regularly going to failure.
And, of course, it is also possible that the results of the study reviewed here are a statistical “mirage”. The authors explain how they controlled for various possible confounders by adjusting the actual measures. This approach is generally less advisable than controlling for the effects of confounders by including the confounders in a multivariate analysis model ().
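To see why including a confounder in the model is usually preferable, consider a minimal sketch with entirely made-up numbers (not from the study discussed here): a hypothetical strength gain that depends both on the training condition and on baseline strength, which acts as a confounder. A naive comparison of group means picks up part of the confounder's effect, while a regression that includes the confounder as a covariate recovers the group effect.

```python
# Illustration of confounder control via a multivariate (regression) model.
# All data below are hypothetical and chosen for clarity, not taken from the study.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A is small and well-behaved)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column to the diagonal.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical data: gain y driven by condition g (0 = NRF, 1 = RF,
# true group effect = 2.0) plus a baseline-strength confounder x.
g = [0, 0, 0, 1, 1, 1]
x = [10, 20, 30, 15, 25, 35]
y = [2.0 * gi + 0.5 * xi for gi, xi in zip(g, x)]

# Naive comparison of group means ignores that the groups differ in x:
naive = (sum(yi for yi, gi in zip(y, g) if gi) / 3
         - sum(yi for yi, gi in zip(y, g) if not gi) / 3)
print(naive)  # 4.5 -- inflated by the baseline-strength difference

# Multiple regression y ~ 1 + g + x, fit by the normal equations,
# recovers the true group effect:
X = [[1.0, float(gi), float(xi)] for gi, xi in zip(g, x)]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
intercept, group_effect, slope = solve(XtX, Xty)
print(round(group_effect, 6))  # 2.0
```

Adjusting the measures themselves before analysis amounts to guessing the confounder's contribution in advance; letting the model estimate it, as above, keeps that guess out of the reported effect.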
Nevertheless, in light of the above I am not so sure that regularly doing resistance exercise to failure is such a good idea.