I'm excited to share my new publication with Steve Barrett & Chris Towlson. This study has been a 7-year journey! Alongside the key talking points from the paper, I want to be transparent about the inspiration, challenges & realities along the way.
Our original investigation, available online and open access here on PLOS ONE, is entitled:
“Measurement properties of external training load variables during standardised games in soccer: Implications for training and monitoring strategies”
Clubb J, Towlson C, Barrett S (2022) Measurement properties of external training load variables during standardised games in soccer: Implications for training and monitoring strategies. PLoS ONE 17(1): e0262274. https://doi.org/10.1371/journal.pone.0262274
The idea for this study was born back in 2015 in conversations with Stu Cormack, inspired by the late Nick Broad's work. He was driven to find solutions to "test without testing" and, in his absence, we wanted to take up that mantle.
Nick's ambition in this area also inspired others. Mathieu Lacome, Martin Buchheit, and Ben Simpson published this study in IJSPP in 2018, citing Nick as an author, investigating predicted heart rate responses to football drills as a measure of readiness.
We set out to explore the use of a standardised game as a measure of readiness. The training drill utilised the same rules, player numbers, pitch size, time on/off, sets and reps, and timing within the mesocycle. Can changes in external load measures be indicative of fatigue?
In its original format, the study was rejected by 3 different journals (despite multiple submissions). Although we received much positive feedback, especially regarding the novelty of the study (at the time), there were a number of concerns. Though justifiable, these concerns were difficult to overcome, particularly in an applied study that utilised data collected as part of everyday processes. They included the number of trials, the lack of internal load measures & the integration of external load from different tracking systems.
We were able to address these concerns in a new study in a different setting, thanks to Amber Rowell, Rob Aughey, and Stu Cormack:
In 2018, Steve Barrett encouraged me to revisit the study. Given the limitations in training load measures, we changed the focus of the investigation to explore the measurement properties of the standardised game. A dataset was added from another setting with a similar drill.
If standardised training drills are to be used as in-situ tests of fatigue, we need to understand the repeatability of the load measures across trials. Therefore, we investigated the reliability and sensitivity of external load across standardised 10v10, 11v11 & 7v7+6 games.
Peer review is an important part of the scientific process but not without its challenges. Social media can gloss over these challenges too! The tweets of acceptance do not normally share the revisions and rejections involved! This paper was rejected by 6 other journals.
With the additional help of Chris Towlson, and taking on board the feedback from all the reviewers to date, I'm pleased to share this work via PLOS ONE. I believe the study is now in its best format and hopefully adds value to the literature. Thank you, reviewers!
Key takeaways: cumulative measures of distance and "PlayerLoad" demonstrated good reliability at the group level across trials of standardised training drills, but high-speed running did not. However, within-subject reliability also needs to be understood...
Some athletes demonstrate more repeatable external load outputs than others. In line with the push to "reveal your data", we used a violin plot to visualise the within-subject CV% of the different external load measures across the different game formats.
If you are trying to understand the signal and noise across trials of a standardised training drill in your applied setting, potentially for insights into fatigue/readiness, we recommend calculating reliability on an individual level.
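To make that recommendation concrete, here is a minimal sketch (not our actual analysis code) of how within-subject CV% could be calculated across repeated trials and visualised with a violin plot, in the spirit of the figure described above. The column names, load measures, and values below are hypothetical placeholders for a long-format dataset from your own setting.

```python
# Minimal sketch: within-subject CV% across repeated trials of a
# standardised drill, visualised as a violin plot per load measure.
# All column names and data values here are hypothetical placeholders.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Long-format data: one row per athlete per trial.
df = pd.DataFrame({
    "athlete": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "trial":   [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "total_distance_m": [5100, 5230, 4980, 4750, 4900, 4810, 5400, 5550, 5380],
    "hsr_m":            [310, 420, 250, 180, 260, 120, 500, 610, 430],
})

measures = ["total_distance_m", "hsr_m"]

# Within-subject CV% = (SD of an athlete's trials / their mean) * 100,
# calculated separately for each external load measure.
cv = (
    df.groupby("athlete")[measures]
      .agg(lambda x: x.std(ddof=1) / x.mean() * 100)
      .reset_index()
      .melt(id_vars="athlete", var_name="measure", value_name="cv_pct")
)

# One violin per measure: the spread shows how much individuals differ
# in the repeatability of their outputs (the "signal and noise").
sns.violinplot(data=cv, x="measure", y="cv_pct", inner="point")
plt.ylabel("Within-subject CV (%)")
plt.show()
```

With real squad data over more trials, the shape of each violin makes it easy to spot which athletes (and which measures, e.g. high-speed running) are too noisy to interpret as a fatigue signal.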
If anyone is interested in conducting a study on this in their applied setting, please do get in touch! We would love to continue this work and explore the ability to "test without testing".