“We can neither deny what science affirms nor affirm what science denies.” I forget who said this, but whoever said it, it’s true. If you’re not so sure about that, it’s likely because you’re misinterpreting the statement as meaning that science is always right about everything. But that’s not at all what it says. What it says is that if you want to be “right” about anything, you must use the scientific method to address whatever it is you want to be right about. For example, if the scientific method is used to arrive at the conclusion that earth’s climate is changing, and that human activity is the primary driver of that change, then no one should put any stock in a denial of this conclusion unless it, too, is arrived at through the use of the scientific method. Even if it turns out that earth’s climate is not changing or that human activity is not the primary driver of that change, a person whose reason for denying the current scientific consensus on this matter is that it snowed in April one time last year is not really “right,” or is right only in the sense that a stopped clock is right twice a day. Indeed, the only way it could really “turn out” that earth’s climate is not changing or that human activity is not the cause of that change is for science itself to come to this new conclusion.
The scientific method is really nothing more, and nothing less, than intellectual integrity. By nature, individual human beings tend to form highly biased beliefs. A highly biased belief can be true, but in general, biased beliefs are unreliable. The scientific method was developed as a way to remove bias from the process of belief formation as much as possible. It is by no means a perfectly reliable method of forming beliefs, but it is more reliable than any other method.
Granted, the applicability of the scientific method is limited. It cannot be used to settle questions such as whether the Beatles are better than the Rolling Stones or whether prisoners should be allowed to vote—in other words, aesthetic or moral questions. Science is also of limited value in the domain of real-world problem solving. For example, I’d sooner trust an experienced general with a record of winning battles to win the next battle than a scientist who came up with a new strategy for winning battles by running a bunch of computer simulations.
Endurance sports training is another example. Historically, elite coaches and athletes have been way out ahead of the scientists in identifying the methods that do and don’t work. The crucible of international competition is not a controlled study, but it’s enough like one in its ruthless determination of winners and losers to have given lower-level coaches and athletes like me a high degree of confidence in the beliefs those elites have formed about the best way to train. In contrast, it’s surprisingly difficult to design and execute a controlled scientific study that has any substantive relevance to real-world endurance training. For example, one of the greatest certainties of endurance training is that high-volume training is essential to maximizing fitness and performance, yet there is virtually zero scientific evidence to support this certainty, because it’s impractical to execute the kind of strictly controlled, long-term prospective study needed to supply such evidence.
But things are changing. The advent of wearable devices has made it possible for sport scientists to take a “big data” approach to investigating what works and what doesn’t in endurance training. In this approach, scientists dispense with the familiar method of generating hypotheses and testing them by actively intervening in the training of a small group of athletes. Instead, they collect relevant data from very large numbers of athletes and use statistical tools to quantify correlations between particular inputs (e.g., training volume) and specific outputs (e.g., marathon performance). While this approach lacks the tidiness of the traditional controlled study, it has the potential to yield results of equal empirical validity by virtue of the sheer volume of data involved. And because these studies are done in situ, they don’t share the controlled prospective study’s questionable real-world relevance.
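To make this concrete, here’s a minimal sketch in Python of the kind of correlation analysis just described. Everything in it is invented for illustration: the variable names, the synthetic data, and the assumed relationship between volume and performance are stand-ins for a real wearable-device dataset.

```python
import numpy as np

# Synthetic stand-in for a wearable-device dataset: one row per runner.
rng = np.random.default_rng(42)
n_runners = 10_000

# Input: average weekly training volume in kilometers (invented range).
weekly_km = rng.uniform(20, 120, n_runners)

# Output: marathon time in minutes. The benefit of volume is assumed
# here, purely so the sketch has a pattern to find.
marathon_min = 300 - 0.8 * weekly_km + rng.normal(0, 20, n_runners)

# The core of the big-data approach: quantify the input-output
# correlation across a very large sample rather than intervening
# in a small one.
r = np.corrcoef(weekly_km, marathon_min)[0, 1]
print(f"Correlation of weekly volume with marathon time: r = {r:.2f}")
```

The sheer size of the sample is what does the work: with tens of thousands of runners, even modest correlations can be measured with real precision.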
The Science of Running
As an experienced endurance coach who respects science, I have long been circumspect about using science to inform my coaching, always checking new findings against what I know from real-world experience before incorporating them into my practice. But studies based on the big-data approach are my kind of science, because they’re really just a formalized version of the learning we coaches do in the real world.
So I was particularly excited to see a new study titled “Human Running Performance from Real-World Big Data” in the journal Nature Communications. It’s a true landmark investigation, drawing observations from data representing 1.6 million exercise sessions completed by roughly 14,000 individuals. Its authors, Thorsten Emig of Paris-Saclay University and Jussi Peltonen of the Polar Corporation, are clearly very smart guys who understand both statistics and running. The paper is highly readable even for laypersons like me, and it’s also available free online, so I won’t belabor its finer points here. What I will say is that its three key findings squarely corroborate the conclusions that elite coaches and athletes have come to heuristically over the past 150 years of trying stuff. Here they are:
Key Finding #1 – Running More Is the Best Way to Run Faster
One of the key variables in the performance model developed by Emig and Peltonen is speed at maximal aerobic power (roughly equivalent to velocity at VO2max), which they are able to “extract” from race performance data. The collaborators found that the strongest training predictor of this variable was mileage. Simply put, runners who ran more were fitter and raced faster. Emig and Peltonen speculated that high-mileage training achieved this effect principally by improving running economy.
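Emig and Peltonen’s actual performance model is far more sophisticated than anything I could reproduce here, but the gist of “strongest training predictor” can be illustrated with a toy regression. All the variables and coefficients below are invented; the point is only how one predictor is judged stronger than another once both are put on the same scale.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# Invented training inputs for each runner.
weekly_km = rng.uniform(20, 120, n)      # volume
hard_sessions = rng.integers(0, 4, n)    # weekly high-intensity sessions

# Invented outcome: MAP speed in m/s, with volume assumed to matter
# most so the sketch reproduces the shape of the finding.
map_speed = (2.5 + 0.02 * weekly_km + 0.05 * hard_sessions
             + rng.normal(0, 0.2, n))

# Standardize the predictors so their fitted coefficients are
# comparable, then fit ordinary least squares with an intercept.
X = np.column_stack([weekly_km, hard_sessions])
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([np.ones(n), Xz])
coefs, *_ = np.linalg.lstsq(A, map_speed, rcond=None)

# The larger standardized coefficient marks the stronger predictor.
print(f"volume coefficient:    {coefs[1]:.3f}")
print(f"intensity coefficient: {coefs[2]:.3f}")
```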
Key Finding #2 – There Is No Such Thing as Too Slow in Easy Runs
Another clear pattern in the data collected by Emig and Peltonen was that runners with a higher MAP speed tended to spend more time training at lower percentages of this speed. In other words, faster runners tended to train slower relative to their ability. As an example, the collaborators tell us that a runner with a MAP speed of 4 meters per second (6:42/mile) will do most of their training between 64 and 84 percent of this speed, whereas a runner with a MAP speed of 5 meters per second (5:21/mile) will cap their easy runs at 66 percent of this speed. Here we have clear validation of the 80/20 rule of intensity balance, which I always like to see.
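The pace arithmetic here is easy to check for yourself. Below is a short sketch that converts a MAP speed into a per-mile pace and computes the easy-run window implied by the 64 to 84 percent band in the example above.

```python
METERS_PER_MILE = 1609.34

def pace_per_mile(speed_mps: float) -> str:
    """Convert a speed in meters per second to a mm:ss per-mile pace."""
    seconds = METERS_PER_MILE / speed_mps
    minutes, secs = divmod(round(seconds), 60)
    return f"{minutes}:{secs:02d}"

map_speed = 4.0  # m/s, i.e., roughly 6:42 per mile
print(f"MAP pace:         {pace_per_mile(map_speed)}/mile")

# Easy-run window at 64-84 percent of MAP speed, per the example above.
for frac in (0.84, 0.64):
    print(f"{frac:.0%} of MAP speed: {pace_per_mile(map_speed * frac)}/mile")
```

Run it and you’ll see that “easy” for this runner means roughly 8:00 to 10:29 per mile, dramatically slower than race pace, which is exactly the point.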
Key Finding #3 – Training Load Is Not the Gift That Keeps on Giving
Perhaps the “freshest” key finding of this study is one that validates the practice of training in macrocycles not exceeding several months in length. What Emig and Peltonen discovered on this front was that individual runners appeared to have an optimal cumulative training load representing the accumulated seasonal volume and intensity of training that yielded maximal fitness and performance. Runners gained fitness in linear fashion as the season unfolded and as they approached this total, but when they went beyond it, their fitness regressed. In short, training is not the gift that keeps on giving. Runners can train only so much and get only so fit before they need a break.
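The study’s actual analysis is more involved, but the shape of this finding, fitness rising toward an optimum and regressing beyond it, can be sketched by fitting a concave curve to fitness-versus-load data and reading off its peak. The numbers below are synthetic, made up purely to show the method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic season: cumulative training load (arbitrary units) versus
# a fitness score that rises roughly linearly, then regresses past an
# assumed optimum at load = 600. Curve and optimum are both invented.
load = np.linspace(0, 1000, 50)
fitness = -0.0002 * (load - 600) ** 2 + 80 + rng.normal(0, 2, 50)

# Fit a quadratic; its vertex estimates the optimal cumulative load.
c2, c1, c0 = np.polyfit(load, fitness, 2)
optimal_load = -c1 / (2 * c2)
print(f"Estimated optimal cumulative training load: {optimal_load:.0f} units")
```

Done per athlete, this kind of curve-fitting is how an individual optimum could be estimated from logged training data.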
That’s science.