Looking Beyond Training Load in Injury Risk Forecasting
Updated: Aug 6
This post originally appeared on Zone7’s Performance Hub.
I read with great interest Zone7’s recent white paper on ‘The Importance of Workload, Strength, Recovery and Environmental Context Data in an Injury Risk Forecasting Model’.
Sports practitioners face a tsunami of technology and data collection options. Weighing the value of each data stream against its implementation burden (for both athletes and practitioners) can guide a sports performance department in developing a data strategy that optimises outcomes. So, can artificial intelligence (AI) help us determine and review our data ecosystem? And what should we do if a data stream does not improve model performance?
Let’s take a closer look at the findings discussed in the Zone7 white paper and how practitioners might use them as part of their own data strategy.
The Core Data Inputs
Firstly, it comes as no surprise to see the three “must have” data streams that make up Zone7’s baseline models. These core inputs are selected because of their availability in almost all environments, and they create a stable, baseline model to compare subsequent models to. They are:
Injury history.
External load data from training and competition.
Sport-specific context, such as competitive schedule and periodisation approaches.
It is widely acknowledged that the strongest risk factor for future injury is previous injury. Therefore, injury history clearly lays the foundation for injury risk forecasting. Next, external load data is now widely collected throughout professional sports, both in competition and training. This is pertinent given that, according to Windt and Gabbett (2016), load affects injury risk in three ways:
load is the vehicle through which athletes are exposed to external risk factors and potential inciting events
load drives positive adaptations to training, i.e., fitness, which can improve modifiable internal risk factors (e.g., aerobic capacity, body composition)
load causes negative consequences of training, i.e., fatigue, which can temporarily decrease capacity in modifiable internal risk factors (e.g., tissue resilience, neuromuscular control)
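The fitness and fatigue pathways above are often formalised with an impulse-response model in the style of Banister, where each day's load feeds a slow-decaying fitness term and a fast-decaying fatigue term. A minimal sketch, with illustrative constants rather than fitted values:

```python
import math

def fitness_fatigue(daily_loads, k_fit=1.0, k_fat=2.0, tau_fit=42.0, tau_fat=7.0):
    """Banister-style impulse-response model: each day's load adds to a
    slow-decaying fitness term and a fast-decaying fatigue term; modelled
    readiness is their difference. All constants here are illustrative only."""
    fitness = fatigue = 0.0
    readiness = []
    for load in daily_loads:
        # yesterday's accumulations decay, today's load is added
        fitness = fitness * math.exp(-1.0 / tau_fit) + k_fit * load
        fatigue = fatigue * math.exp(-1.0 / tau_fat) + k_fat * load
        readiness.append(fitness - fatigue)
    return readiness
```

With a single hard session followed by rest, modelled readiness first dips (fatigue dominates), then rises above its starting point as fatigue decays faster than fitness, which is exactly the dual role of load described above.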
Finally, understanding the sport, playing position, and individual athlete demands, within the context of each team’s periodisation and philosophy, is fundamental to executing suitable analysis, as explored in a previous post.
These three inputs provide the bedrock for athlete risk management. Yet, there are a multitude of other potential avenues for data collection. If a practitioner can understand the most pertinent data streams, then collection, analysis, and dissemination resources can be focused on these areas.
Strengthening the Model with other Data Streams
Strength is a moderator that can provide athletes with better tolerance to workload, as demonstrated, for example, by Shane Malone and colleagues in their study of amateur hurling players.
Specific muscle-group strength has also been widely researched and debated, probably none more so than eccentric hamstring strength. One prospective cohort study of over 150 male elite soccer players (Timmins et al., 2016) showed low levels of eccentric hamstring strength, along with the presence of short biceps femoris long head fascicles, significantly increased the risk of future hamstring injury.
As such, it was interesting to read that the inclusion of data from Nordic hamstring exercise (also known as Russian Leans) testing in one Major League Soccer team improved model performance. Specifically, sensitivity, i.e. the true positive rate, improved for both injury detection and the forecast volume of days lost, without a loss in specificity. If unfamiliar with these terms, take a look at our AI Dictionary. The inclusion of these measures also adds context to the system’s rationale and/or recommendations, improving the platform’s explainability.
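For readers who prefer code to prose, sensitivity and specificity fall straight out of the confusion matrix. A minimal sketch with binary labels (1 = injury, 0 = no injury):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = true positives / all actual positives (injuries caught).
    Specificity = true negatives / all actual negatives (healthy cases
    correctly left unflagged)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```

An uplift like the one described, better sensitivity without a loss in specificity, means more injuries are flagged without generating extra false alarms.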
Admittedly, such an improvement may not always be seen. Would that mean such data is not valuable? Absolutely not. Strength testing data, and indeed performing the exercises themselves regardless of data collection, can have a positive effect on injury risk reduction.
Another area of interest is neuromuscular fatigue tracking, such as via countermovement jumps (CMJ). Zone7 highlighted one case in which variations in CMJ outcomes were the main factors contributing to injury risk in their model. Again, this information can be useful for a practitioner’s day-to-day decision making, providing objective insight into an athlete’s dose-response. And it may provide an additional bonus if such data improves AI model performance.
A factor that will greatly affect a data stream’s impact on model performance is collection cadence. Consistent frequency may be required to uplift the model, but we have to accept that data collection cannot be unlimited. I know of teams that previously ran CMJ tests multiple times a week but found it too onerous for both athletes and staff. It comes down to finding the best compromise between value and burden. One potential solution is invisible monitoring, whereby data is collected as part of the training session itself. This could be one approach to increasing the cadence of jump and/or hamstring strength data collection, as both can be integrated into gym sessions and measured at the same time.
A Refocus on Internal Workload?
I was also greatly interested to read the Zone7 case study that showed uplift in model performance with heart rate data. While heart rate monitoring was the central component of many athlete monitoring systems in years gone by, internal workload has fallen out of fashion in recent times. This is probably due to the ease of many external load tracking technologies that require minimal to no athlete compliance (e.g., optical tracking cameras). However, internal and external load represent different constructs.
While external load is the physical work completed, internal load represents the physiological cost (Halson, 2014). Taken together, these can provide insight into fitness-fatigue status. Given the coupling of physiological and psychological stressors on the body - stress is stress, whatever the cause - internal load may provide a more holistic and individualised insight into a training stimulus. By contrast, when we focus purely on distances covered and other mechanical measures, we lose the athlete’s internal experience of the training load.
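One long-standing way to turn heart rate into an internal load number is Banister's training impulse (TRIMP), which weights session duration by fractional heart-rate reserve. This is a classic formulation offered purely as an illustration, not necessarily what any particular forecasting model ingests:

```python
import math

def banister_trimp(duration_min, hr_avg, hr_rest, hr_max, sex="male"):
    """Banister's TRIMP: session duration weighted by fractional heart-rate
    reserve, with an exponential factor so hard minutes count for more.
    The constants follow the published male/female weightings."""
    hrr = (hr_avg - hr_rest) / (hr_max - hr_rest)  # fractional HR reserve
    a, b = (0.64, 1.92) if sex == "male" else (0.86, 1.67)
    return duration_min * hrr * a * math.exp(b * hrr)
```

For example, 60 minutes at an average of 150 bpm (resting 50, max 190) scores about 108 arbitrary units, while the same hour at 170 bpm scores considerably more, reflecting the physiological cost rather than the mechanical work done.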
I suspect the evolution of technology and the rise of the quantified self will add to internal load data collection. Athletes likely now have access to their own heart rate data through personal devices, such as smart watches and rings. If such technology is sufficiently valid, it may provide heart rate data without the need for chest belts. As mentioned in their white paper, Zone7 have already integrated consumer devices from the likes of Polar, Firstbeat, Oura, and Garmin. Plus, I’m sure the emergence of smart clothing will further evolve this area.
Of course, we must remember the Garbage In Garbage Out (GIGO) maxim: poorly collected heart rate data will be of minimal value. So practitioners should give critical thought as to whether the data can be collected in a valid and reliable manner within their specific environment. In the absence of valid and reliable heart rate data, practitioners may want to consider subjective measures of internal load.
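In practice, even a simple automated screen can catch the worst garbage before it reaches a model. A sketch along those lines, where every threshold is an illustrative default rather than a validated cut-off:

```python
def screen_hr_series(hr_bpm, lo=30, hi=220, max_jump=30):
    """Mark heart-rate samples as usable (True) or suspect (False):
    rejects missing values, values outside a plausible bpm range, and
    sample-to-sample jumps larger than max_jump bpm (sensor dropout or
    spikes). All thresholds are illustrative, not validated cut-offs."""
    flags, prev = [], None
    for hr in hr_bpm:
        valid = (hr is not None and lo <= hr <= hi
                 and (prev is None or abs(hr - prev) <= max_jump))
        flags.append(valid)
        if valid:
            prev = hr  # only trusted samples anchor the jump check
    return flags
```

Flagged samples can then be discarded or inspected before the series is used for internal load calculations, rather than being fed to the model as-is.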
The Debate on Subjective Measures
I understand Zone7’s caution around subjective measures, such as wellness and rating of perceived exertion (RPE). From a modelling perspective, objective measures are preferred: subjective measures are vulnerable to human error, agenda, and compliance issues. However, I argue they are an important and valuable aspect of a comprehensive load monitoring programme.
Including perceptual measures incorporates the athlete’s internal experience of the training load, as discussed a moment ago. When perceived load is greater than the measured external load, this can flag a change in the athlete’s fitness-fatigue status. In addition, I think collecting subjective data gives the athlete a voice; it brings them into the training process and offers the opportunity to provide feedback.
Of course, buy-in is essential to collecting meaningful subjective information, and to achieve that, education and communication should be at the forefront. A recent study of semi-structured interviews with athletes confirmed that some athletes are dishonest with self-reported data (Coventry et al., 2023), apparently driven by a high burden and a lack of transparency around the process. Explaining the why to our athletes and making the process as streamlined as possible can both boost buy-in, which can in turn improve data quality. Perhaps in such an environment, subjective data would even benefit model performance!
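As a concrete example of how simple and low-burden subjective load monitoring can be, Foster's session-RPE method multiplies a single post-session rating by duration, and the resulting daily loads yield weekly monotony and strain, both of which have been linked to illness and injury risk. A minimal sketch:

```python
from statistics import mean, pstdev

def session_rpe_load(rpe, duration_min):
    """Foster's session-RPE load: a 0-10 (CR-10) perceived-exertion rating
    multiplied by session duration in minutes (arbitrary units)."""
    if not 0 <= rpe <= 10:
        raise ValueError("RPE should be on the 0-10 CR-10 scale")
    return rpe * duration_min

def monotony_and_strain(daily_loads):
    """Training monotony (mean / SD of daily load) and strain
    (total load x monotony) over one week of daily loads."""
    m, sd = mean(daily_loads), pstdev(daily_loads)
    monotony = m / sd if sd else float("inf")
    return monotony, sum(daily_loads) * monotony
```

A 60-minute session rated 7 scores 420 AU, and a week of identical daily loads maximises monotony, one quantitative argument for varying the stimulus. A single rating per session is about as streamlined as athlete self-report gets.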
We have a responsibility to be critical and intentional with the data we collect from our athletes. This involves respecting the time burden we place on them, along with the potential physical and/or mental burdens associated with testing and data collection. Understanding the most valuable measures can streamline our data strategy and support effective and efficient decision making.
However, we must also remember many data inputs may add value to the environment even if they do not improve model performance. Strength measures can guide programming. Response measures can guide recovery and treatment. Subjective measures can give the athletes an avenue for feedback and self-reporting. Just because the computer says no, doesn’t mean you have to as well. That said, it is exciting to see analytics that can quantify the value of our data collection processes and empower practitioners to objectively craft an internal data strategy.