James Spragg is a young South African exercise physiologist who has carved out an interesting niche for his research. It is based on the idea that the fastest athlete on fresh legs is not necessarily the fastest athlete on fatigued legs, which is an important distinction, as in most endurance races, it is better to be the guy or gal who is fastest on fatigued legs. Yet conventional fitness testing protocols ignore this reality, which is a problem, because it has the potential to skew athletes’ training too far in the direction of improving fresh-legged performance.

In one of his early studies, Spragg teamed up with several other researchers, including Iñigo Mujika, whose name you might recognize from his work related to the 80/20 intensity balance, to compare power profiles in nine members of a U23 cycling team and five professional cyclists. Interestingly, they found that the U23 riders were able to generate as much power as the pros on fresh legs. Had this experiment been limited to non-fatigued performance testing, we would have been left to wonder why the U23 cyclists were not also on professional teams. But what Spragg and his collaborators also found was that, in U23 cyclists, achievable power outputs began to decline after 1,500 to 2,000 kilojoules (about 360 to 480 kilocalories) of prior work was completed, whereas in professional cyclists, performance fell off only after 3,000 kJ of pedaling.
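For readers who train with a power meter, the fresh-versus-fatigued comparison in this study can be approximated from a single ride file. Here is a minimal sketch, assuming 1 Hz power samples; the function names and the 5-minute window are my own choices, and the 2,100 kJ threshold in the usage example is purely illustrative, not a value from the study:

```python
def cumulative_work_kj(power_watts):
    """Running total of mechanical work in kJ from 1 Hz power samples
    (1 watt for 1 second = 1 J = 0.001 kJ)."""
    total = 0.0
    out = []
    for p in power_watts:
        total += p / 1000.0
        out.append(total)
    return out

def best_power(power_watts, window_s):
    """Best average power over any contiguous window of window_s samples."""
    if len(power_watts) < window_s:
        return None
    window_sum = sum(power_watts[:window_s])
    best = window_sum
    for i in range(window_s, len(power_watts)):
        window_sum += power_watts[i] - power_watts[i - window_s]
        best = max(best, window_sum)
    return best / window_s

def fresh_vs_fatigued(power_watts, threshold_kj, window_s=300):
    """Split the ride where cumulative work crosses threshold_kj, then
    compare best 5-min power before vs. after that point."""
    work = cumulative_work_kj(power_watts)
    split = next((i for i, w in enumerate(work) if w >= threshold_kj), len(work))
    return best_power(power_watts[:split], window_s), best_power(power_watts[split:], window_s)
```

On a synthetic ride of 10,000 seconds at 200 W followed by 5,000 seconds at 120 W, `fresh_vs_fatigued(ride, threshold_kj=2100)` returns a fresh 5-minute power of 200 W and a fatigued one of 120 W, which is the kind of delta Spragg's protocol is designed to expose.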

What’s more, a later study by Dutch and South African researchers found that, among top-tier professional cyclists, those able to do the most work before their power output capacity dropped off performed best in races. So, it appears that the ability to ride fast on tired legs is a key factor separating the best from the rest, both between and within echelons of cycling.

Spragg’s recent study is also his most ambitious to date. It involved collecting power data from every training ride and race completed by 30 U23 professional cyclists over three years. The aim was to determine how individual cyclists’ fresh and fatigued power profiles changed over the course of a competitive season and how these changes related to their training. The main findings were as follows:

  • Fresh power profiles remained relatively stable throughout the season.
  • Fatigued power profiles changed over the course of the season.
  • The difference between fresh and fatigued power profiles also varied as the season unfolded, indicating that the two phenomena are independent.
  • More time spent at low intensity in training predicted better 2-minute power on both fresh and fatigued legs.
  • A shift away from moderate intensity toward high intensity was associated with a stronger fatigued power profile (i.e., a smaller delta between fresh and fatigued power).
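The intensity-distribution measures behind the last two findings can be computed from the same session data. A minimal sketch, assuming 1 Hz power samples; the zone boundaries (80 percent and 105 percent of threshold power) are hypothetical placeholders, not the cutoffs used in the study:

```python
def intensity_distribution(power_watts, threshold_w, low_frac=0.80, high_frac=1.05):
    """Fraction of ride time spent at low, moderate, and high intensity,
    relative to a threshold power (1 sample = 1 second)."""
    low = sum(1 for p in power_watts if p < low_frac * threshold_w)
    high = sum(1 for p in power_watts if p >= high_frac * threshold_w)
    n = len(power_watts)
    moderate = n - low - high
    return low / n, moderate / n, high / n
```

With a threshold of 300 W, a ride of 80 seconds at 150 W, 10 at 270 W, and 10 at 320 W comes out to (0.8, 0.1, 0.1), i.e., an 80/20-style distribution. Tracking how these three fractions shift over a season is, in spirit, what the study's training analysis did.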

An important implication of these findings is that, depending on the type of event an athlete is training for, performing fitness testing in a fresh state may be of limited value. If you specialize in the 400m freestyle event or the 1500m track event, then perhaps testing in a fresh state has greater relevance. But if you’re training for a marathon or an Ironman 70.3, I would imagine that fatigued fitness testing would tell you more. In a narrative review published in October 2021, Spragg, Mujika, and three other colleagues provide detailed recommendations for incorporating fitness testing into training for road cycling events, one of which is to “avoid single effort prediction trials, such as functional threshold power.” As a running and triathlon coach, I personally lean toward using regular workouts to assess fitness. For example, tacking a fast finish onto the end of a long run serves as a good measure of fatigued performance capacity in a marathoner while also functioning as a relevant fitness-builder for the marathon.

Another interesting finding from Spragg’s 2022 study is that cyclists who maintained their peak training load through the late season also maintained their fatigue resistance, whereas those who reduced their training load during this period lost fatigue resistance. This finding is consistent with other studies reporting a correlation between training volume and fatigue resistance/endurance. One example is a 2020 study by Thorsten Emig of Paris-Saclay University and Jussi Peltonen of the Polar Corporation, who collected and analyzed training and racing data from devices worn by more than 14,000 runners for a combined 1.6 million exercise sessions. For the purposes of this experiment, endurance was defined as the percentage of VO2max running velocity that a runner could sustain for one hour, and the data showed a strong positive correlation between training volume and endurance thus defined.
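Emig and Peltonen's endurance measure reduces to a simple ratio. A sketch, with hypothetical inputs (you would need estimates of a runner's velocity at VO2max and their best sustainable one-hour velocity, both in the same units):

```python
def endurance_index(one_hour_velocity, vo2max_velocity):
    """Endurance as the percentage of VO2max velocity sustainable for one hour."""
    return 100.0 * one_hour_velocity / vo2max_velocity
```

For example, a runner whose VO2max velocity is 4.0 m/s and who can hold 3.4 m/s for an hour has an endurance index of 85 percent; the study's finding is that this percentage rises with training volume.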

I wish all of this science had been available when I wrote 80/20 Running back in 2014. It would have bolstered the argument I made therein about how the typical exercise science study design puts a thumb on the scale in favor of HIIT-focused training when compared against the type of training elite endurance athletes do. It’s less of a problem nowadays, but back then it was common to use fresh-legged VO2max tests as the basis for such comparisons. But we now know that a VO2max test performed after extensive prior exercise is likely to yield different results that are more relevant to real-world race performance, and that high-volume, mostly low-intensity training yields better results in pre-fatigued fitness tests.

Oh, well. That’s what second editions are for, right? In the meantime, you can check out our cycling plans here – some are built to improve your FTP and can be used in your off season.

“We can neither deny what science affirms nor affirm what science denies.” I forget who said this, but whoever said it, it’s true. If you’re not so sure about that, it’s likely because you’re misinterpreting the statement as meaning that science is always right about everything. But that’s not at all what it says. What it says is that if you want to be “right” about anything, you must use the scientific method to address whatever it is you want to be right about. For example, if the scientific method is used to arrive at the conclusion that earth’s climate is changing, and that human activity is the primary driver of that change, then no one should put any stock in a denial of this conclusion unless it, too, is arrived at through the use of the scientific method. Even if it turns out that earth’s climate is not changing or that human activity is not the primary driver of that change, a person whose reason for denying the current scientific consensus on this matter is that it snowed in April one time last year is not really “right,” or is right only in the sense that the stopped clock is right twice a day. Indeed, the only way it could really “turn out” that earth’s climate is not changing or that human activity is not the cause of that change is for science itself to come to this new conclusion.

The scientific method is really nothing more, and nothing less, than intellectual integrity. By nature, individual human beings tend to form highly biased beliefs. A highly biased belief can be true, but in general, biased beliefs are unreliable. The scientific method was developed as a way to remove bias from the process of belief formation as much as possible. It is by no means a perfectly reliable method of forming beliefs, but it is more reliable than any other method.

Granted, the applicability of the scientific method is limited. It cannot be used to settle questions such as whether the Beatles are better than the Rolling Stones or whether prisoners should be allowed to vote—in other words, aesthetic or moral questions. Science is also of limited value in the domain of real-world problem solving. For example, I’d put more trust in an experienced general with a record of winning battles to win the next battle than in a scientist who came up with a new strategy for winning battles by running a bunch of computer simulations.

Endurance sports training is another example. Historically, elite coaches and athletes have been way out ahead of the scientists with respect to identifying the methods that do and don’t work. The crucible of international competition is not a controlled study, but it’s enough like one in its ruthless determination of winners and losers to have given lower-level coaches and athletes like me a high degree of confidence in their beliefs about the best way to train. In contrast, it’s actually surprisingly difficult to design and execute a controlled scientific study that has any substantive relevance to real-world endurance training. For example, one of the greatest certainties of endurance training is that high-volume training is essential to maximizing fitness and performance, yet there is virtually zero scientific evidence to support this certainty because it’s impractical to execute the kind of strictly controlled, long-term prospective study needed to supply such evidence.

But things are changing. The advent of wearable devices has made it possible for sport scientists to take a “big data” approach to investigating what works and what doesn’t in endurance training. In this approach, scientists dispense with the familiar tools of generating hypotheses and then testing them by actively intervening in the training of a small group of athletes and instead just collect relevant data from very large numbers of athletes and use statistical tools to quantify correlations between particular inputs (e.g., training volume) and specific outputs (e.g., marathon performance). While this approach lacks the tidiness of the traditional controlled study, it has the potential to yield results that have equal empirical validity by virtue of the sheer volume of data involved. And because these studies are done in situ, they do not share the controlled prospective study’s questionable real-world relevance.

The Science of Running

As an experienced endurance coach who respects science, I have long been highly circumspect in using science to inform my coaching practices. I always check new science against what I know from real-world experience before I incorporate it into my coaching practice. But studies based on the big-data approach are my kind of science because they’re really just a formalized version of the learning we coaches do in the real world.

So I was particularly excited to see a new study titled “Human Running Performance from Real-World Big Data” in the journal Nature Communications. It’s a true landmark investigation, drawing observations from data representing 1.6 million exercise sessions completed by roughly 14,000 individuals. Its authors, Thorsten Emig of Paris-Saclay University and Jussi Peltonen of the Polar Corporation, are clearly very smart guys who understand both statistics and running. The paper is highly readable even for laypersons like myself, and it’s also available free online, so I won’t belabor its finer points here. What I will say is that its three key findings squarely corroborate the conclusions that elite coaches and athletes have come to heuristically over the past 150 years of trying stuff. Here they are:

Key Finding #1 – Running More Is the Best Way to Run Faster

One of the key variables in the performance model developed by Emig and Peltonen is speed at maximal aerobic power (roughly equivalent to velocity at VO2max), which they are able to “extract” from race performance data. The collaborators found that the strongest training predictor of this variable was mileage. Simply put, runners who ran more were fitter and raced faster. Emig and Peltonen speculated that high-mileage training achieved this effect principally by improving running economy.

Key Finding #2 – There Is No Such Thing As Too Slow in Easy Runs

Another clear pattern in the data collected by Emig and Peltonen was that runners with a higher MAP speed tended to spend more time training at lower percentages of this speed. In other words, faster runners tended to train slower relative to their ability. As an example, the collaborators tell us that a runner with a MAP speed of 4 meters per second (6:42/mile) will do most of their training between 64 and 84 percent of this speed, whereas a runner with a MAP of 5 meters per second (5:21/mile) will cap their easy runs at 66 percent of this speed. Here we have clear validation of the 80/20 rule of intensity balance, which I always like to see.
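The pace figures above follow from straightforward arithmetic (meters per second to minutes per mile, with the seconds truncated as in the text). A quick sketch; the 64–84 percent band is taken directly from the example in the paragraph above, not a universal prescription:

```python
def mps_to_pace_per_mile(v_mps):
    """Convert a velocity in m/s to a min:sec per-mile pace string
    (seconds truncated, 1 mile = 1609.34 m)."""
    seconds = 1609.34 / v_mps
    return f"{int(seconds // 60)}:{int(seconds % 60):02d}"

def easy_range(map_mps, lo_frac=0.64, hi_frac=0.84):
    """Easy-run pace band for a given MAP speed, fastest pace first."""
    return mps_to_pace_per_mile(hi_frac * map_mps), mps_to_pace_per_mile(lo_frac * map_mps)
```

For the 4 m/s runner in the example, `mps_to_pace_per_mile(4.0)` gives "6:42" per mile, and `easy_range(4.0)` gives a band of roughly "7:58" to "10:28" per mile for easy running.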

Key Finding #3 – Training Load Is Not the Gift That Keeps on Giving

Perhaps the “freshest” key finding of this study is one that validates the practice of training in macrocycles not exceeding several months in length. What Emig and Peltonen discovered on this front was that individual runners appeared to have an optimal cumulative training load representing the accumulated seasonal volume and intensity of training that yielded maximal fitness and performance. Runners gained fitness in linear fashion as the season unfolded and as they approached this total, but when they went beyond it, their fitness regressed. In short, training is not the gift that keeps on giving. Runners can train only so much and get only so fit before they need a break.

That’s science.
