Measuring children's self-reported sport participation, risk perception and injury history: Development and validation of a survey instrument
- Authors: Siesmaa, Emma; Blitvich, Jennifer; White, Peta; Finch, Caroline
- Date: 2011
- Type: Text, Journal article
- Relation: Journal of Science and Medicine in Sport Vol. 14, no. 1 (2011), p. 22-26
- Relation: http://purl.org/au-research/grants/nhmrc/565900
- Full Text:
- Reviewed:
- Description: Despite the health benefits associated with children's sport participation, the occurrence of injury in this context is common. The extent to which sport injuries impact children's ongoing involvement in sport is largely unknown. Surveys have been shown to be useful for collecting children's injury and sport participation data; however, there are currently no published instruments which investigate the impact of injury on children's sport participation. This study describes the processes undertaken to assess the validity of two survey instruments for collecting self-reported information about child cricket- and netball-related participation, injury history and injury risk perceptions, as well as the reliability of the cricket-specific version. Face and content validity were assessed through expert feedback from primary and secondary level teachers and from representatives of peak sporting bodies for cricket and netball. Test-retest reliability was measured using a sample of 59 child cricketers who completed the survey on two occasions, 3-4 weeks apart. Based on expert feedback relating to face and content validity, modification and/or deletion of some survey items was undertaken. Survey items with low test-retest reliability (κ ≤ 0.40) were modified or deleted, items with moderate reliability (κ = 0.41-0.60) were modified slightly and items with higher reliability (κ ≥ 0.61) were retained, with some undergoing minor modifications. This is the first survey of its kind which has been successfully administered to cricketers aged 10-16 years to collect information about injury risk perceptions and intentions for continued sport participation. Implications for its generalisation to other child sport participants are discussed. © 2010 Sports Medicine Australia.
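The item-retention rule reported in this abstract (κ ≤ 0.40 modify or delete; κ = 0.41-0.60 modify slightly; κ ≥ 0.61 retain) can be sketched as a small helper; the function name and return labels are illustrative, not from the paper:

```python
def item_action(kappa: float) -> str:
    """Map a test-retest kappa to the item-handling rule in the abstract.

    Thresholds follow the abstract: low-agreement items are modified or
    deleted, moderately reliable items are modified slightly, and more
    reliable items are retained.
    """
    if kappa <= 0.40:
        return "modify or delete"   # low test-retest reliability
    elif kappa <= 0.60:
        return "modify slightly"    # moderate reliability (0.41-0.60)
    else:
        return "retain"             # kappa >= 0.61
```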
Injury risk associated with ground hardness in junior cricket
- Authors: Twomey, Dara; White, Peta; Finch, Caroline
- Date: 2011
- Type: Text, Journal article
- Relation: Journal of Science and Medicine in Sport Vol. 15, no. 2 (2011), p. 110-115
- Relation: http://purl.org/au-research/grants/nhmrc/565900
- Full Text:
- Reviewed:
- Description: To establish if there is an association between ground hardness and injury risk in junior cricket. Nested case-series of players who played matches on specific grounds with objective ground hardness measures, within a prospective cohort study of junior community club cricket players. Monitoring of injuries and playing exposure occurred during 434 matches over the 2007/2008 playing season. Objective assessment of the hardness of 38 grounds was undertaken using a Clegg hammer at 13 sites on 19 different junior cricket grounds on the eve of matches across the season. Hardness readings were classified from unacceptably low (<30 g) to unacceptably high (>120 g), and two independent raters assessed the likelihood of each injury being related to ground hardness. Injuries sustained on tested grounds were related to the ground hardness measures. Overall, 31 match injuries were reported; 6.5% were rated as likely to be related to ground hardness, 16.1% as possibly related, 74.2% as unlikely to be related and 3.2% as unknown. The two injuries likely to be related to ground hardness were sustained while diving to catch a ball, resulting in a graze/laceration from contact with hard ground. Overall, 31/38 (82%) ground assessments were rated as having 'unacceptably high' hardness and all others as 'high/normal' hardness. Only one injury occurred on an objectively tested ground. It remains unclear if ground hardness is a contributing factor to the most common injury mechanism of being struck by the ball, and this needs to be confirmed in future larger-scale studies. © 2011 Sports Medicine Australia.
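The hardness bands can be sketched as a classifier; only the two bounds stated in the abstract (<30 g, >120 g) are used, and the middle label is an assumption since the abstract does not define the bands between them:

```python
def classify_hardness(reading_g: float) -> str:
    """Classify a Clegg hammer reading (in gravities) using only the
    bounds stated in the abstract; readings in between are labelled
    generically because the intermediate bands are not specified there."""
    if reading_g < 30:
        return "unacceptably low"
    elif reading_g > 120:
        return "unacceptably high"
    else:
        return "acceptable range"   # placeholder for the unstated middle bands
```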
Facilitators to support the implementation of injury prevention training in youth handball: A concept mapping approach
- Authors: Ageberg, Eva; Bunke, Sofia; Lucander, Karolina; Nilsen, Per; Donaldson, Alex
- Date: 2019
- Type: Text, Journal article
- Relation: Scandinavian Journal of Medicine and Science in Sports Vol. 29, no. 2 (2019), p. 275-285
- Full Text:
- Reviewed:
- Description: There is a need for research to identify effective implementation strategies for injury prevention training within real-world community sports. The aim of this ecological participatory study was to identify facilitators, among stakeholders at multiple levels, that could help injury prevention training become part of regular training routines in youth team handball. Concept mapping, a mixed-method approach for qualitative data collection and quantitative data analysis, was used. Stakeholders (n = 196) of two community team handball clubs (29% players, 13% coaches, 38% caregivers, 11% club, district and national handball administrators, 9% unknown) participated in a brainstorming process. After the research team synthesized the 235 generated statements, 50 stakeholders (34% players, 22% coaches, 24% caregivers, 20% administrators) sorted 89 unique facilitator statements into clusters and rated them for importance and feasibility. Multidimensional scaling and hierarchical cluster analysis yielded five clusters (stress value 0.231): “Understanding and applying knowledge,” “Education, knowledge, and consistency,” “Set-up and exercises,” “Inspiration, motivation, and routines,” and “Club policy and expert collaboration.” The cluster “Understanding and applying knowledge” had the highest mean importance (3.17 out of 4) and feasibility (2.93) ratings. The 32 statements rated as both highly important and feasible (Go-zone) indicate action is required at the individual (end-users) and organizational (policymakers) levels to implement injury prevention training. Results suggest that developing evidence-based context-specific injury prevention training, incorporating physiological, biomechanical and psychological components, and an associated context-specific implementation plan in partnership with all stakeholders should be a high priority to facilitate the implementation of injury prevention training in youth team handball.
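In concept mapping, the "Go-zone" mentioned above conventionally contains statements rated above the mean on both importance and feasibility; a minimal sketch of that filter, with illustrative statements rather than the study's own:

```python
def go_zone(statements):
    """Return labels of statements rated above the mean on both
    importance and feasibility -- the conventional concept-mapping
    'Go-zone' quadrant.

    `statements` is a list of (label, importance, feasibility) tuples.
    """
    imp_mean = sum(s[1] for s in statements) / len(statements)
    fea_mean = sum(s[2] for s in statements) / len(statements)
    return [s[0] for s in statements if s[1] > imp_mean and s[2] > fea_mean]

# illustrative facilitator statements with mean ratings (0-4 scale)
items = [("coach education", 3.4, 3.1),
         ("shorter sessions", 2.1, 3.5),
         ("club policy", 3.3, 2.2),
         ("video examples", 3.2, 3.0)]
print(go_zone(items))  # → ['coach education', 'video examples']
```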
The reliability and sensitivity of performance measures in a novel pace-bowling test
- Authors: Feros, Simon; Young, Warren; O’Brien, Brendan
- Date: 2018
- Type: Text, Journal article
- Relation: International Journal of Sports Physiology and Performance Vol. 13, no. 2 (2018), p. 151-155
- Full Text:
- Reviewed:
- Description: Objectives: To evaluate the reliability and sensitivity of performance measures in a novel pace-bowling test. Methods: Thirteen male amateur-club fast bowlers completed a novel pace-bowling test on 2 separate occasions, 4–7 d apart. Participants delivered 48 balls (8 overs) at 5 targets on a suspended sheet situated behind a live batter, who stood in a right-handed and left-handed stance for an equal number of deliveries. Delivery instruction was frequently changed, with all deliveries executed in a preplanned sequence. Data on ball-release speed were captured by radar gun. A high-speed camera captured the moment of ball impact on the target sheet for assessment of radial error and bivariate variable error. Delivery rating of perceived exertion (0–100%) was collected as a measure of intensity. Results: Intraclass correlation coefficients and coefficients of variation revealed excellent reliability for peak and mean ball-release speed, acceptable reliability for delivery rating of perceived exertion, and poor reliability for mean radial error, bivariate variable error, and variability of ball-release speed. The smallest worthwhile change indicated high sensitivity with peak and mean ball-release speed and lower sensitivity with mean radial error and bivariate variable error. Conclusions: The novel pace-bowling test incorporates improvements in ecological validity compared with its predecessors and can be used to provide a more comprehensive evaluation of pace-bowling performance. Data on the smallest worthwhile change can improve interpretation of pace-bowling research findings and may therefore influence recommendations for applied practice. © 2018 Human Kinetics, Inc.
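The reliability statistics named above can be sketched as follows; computing the smallest worthwhile change as 0.2 × the between-subject SD is a common convention assumed here, not a method confirmed by the abstract, and the speeds are illustrative:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation as a percentage: sample SD / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def smallest_worthwhile_change(values, factor=0.2):
    """SWC as factor x between-subject SD -- a common convention;
    the paper's exact method may differ."""
    return factor * statistics.stdev(values)

# illustrative peak ball-release speeds (km/h) across bowlers
speeds = [118.2, 121.5, 119.8, 123.0, 120.4]
print(round(cv_percent(speeds), 2))                  # 1.5 (%)
print(round(smallest_worthwhile_change(speeds), 2))  # 0.36 (km/h)
```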
How much is enough in rehabilitation? High running workloads following lower limb muscle injury delay return to play but protect against subsequent injury
- Authors: Stares, Jordan; Dawson, Brian; Peeling, Peter; Drew, Michael; Heasman, Jarryd; Rogalski, Brent; Colby, Marcus
- Date: 2018
- Type: Text, Journal article
- Relation: Journal of Science and Medicine in Sport Vol. 21, no. 10 (2018), p. 1019-1024
- Full Text:
- Reviewed:
- Description: Objectives: Examine the influence of rehabilitation training loads on return to play (RTP) time and subsequent injury in elite Australian footballers. Design: Prospective cohort study. Methods: Internal (sessional rating of perceived exertion: sRPE) and external (distance, sprint distance) workload and lower limb non-contact muscle injury data were collected from 58 players over 5 seasons. Rehabilitation periods were analysed for running workloads, and time spent in 3 rehabilitation stages (1: off-legs training, 2: non-football running, 3: group football training) was calculated. Multi-level survival analyses with random effects accounting for player and season were performed. Hazard ratios (HR) and 95% confidence intervals (CI) for each variable were produced for RTP time and time to subsequent injury. Results: Of 85 lower limb muscle injuries, 70 were rehabilitated to RTP, with 30 cases of subsequent injury recorded (recurrence rate = 11.8%, new site injury rate = 31.4%). Completion of high rehabilitation workloads delayed RTP (distance: >49,775 m [reference: 34,613–49,775 m]: HR 0.12, 95%CI 0.04–0.36; sRPE: >1266 AU [reference: 852–1266 AU]: HR 0.09, 95%CI 0.03–0.32). Return to running within 4 days increased subsequent injury risk (3–4 days [reference: 5–6 days]: HR 25.88, 95%CI 2.06–324.4). Attaining moderate-high sprint distance (427–710 m) was protective against subsequent injury (154–426 m [reference: 427–710 m]: HR 37.41, 95%CI 2.70–518.64). Conclusions: Training load monitoring can inform player rehabilitation programs. Higher rehabilitation training loads delayed RTP; however, moderate-high sprint running loads can protect against subsequent injury. Shared decision-making regarding RTP should include accumulated training loads and consider the trade-off between expedited RTP and lower subsequent injury risk.
Implementation of concussion guidelines in community Australian Football and Rugby League - The experiences and challenges faced by coaches and sports trainers
- Authors: Kemp, Joanne; Newton, Joshua; White, Peta; Finch, Caroline
- Date: 2015
- Type: Text, Journal article
- Relation: Journal of Science and Medicine in Sport Vol. 19, no. 4 (2015), p. 305-310
- Relation: http://purl.org/au-research/grants/nhmrc/1058737
- Full Text:
- Reviewed:
- Description: Objectives: While guidelines outlining the appropriate management of sport-related concussion have been developed and adapted for use within community sport, it remains unknown how they are experienced by those responsible for implementing them. Design: Longitudinal study. Methods: 111 coaches and sports trainers from community-level Australian Football and Rugby League teams completed pre- and post-season surveys assessing their attitudes towards using concussion guidelines. Participants also provided post-season feedback regarding their experiences in using the guidelines. Results: 71% of participants reported using the guidelines in the preceding season. Post-season attitude was related to pre-season attitude (p = 0.002), football code (p = 0.015), and team role (p = 0.045). An interaction between team role and guideline use (p = 0.012) was also found, with coaches who had used the guidelines, and sports trainers who had not, reporting more positive post-season attitudes towards using the concussion guidelines. Implementation challenges included disputing of return-to-play decisions by players, parents, and coaches, and a perceived lack of time. Recommendations for improved guideline materials included using larger fonts and providing for witnessing of advice given to players. Conclusions: This is the first study to examine the implementation of concussion guidelines in community sport. Training of coaches/sports trainers needs enhancement. In addition, new education should be developed for parents/players about the importance of the return-to-play advice given to them by those who follow these guidelines. Information provided by those who attempted to use the guidelines will assist the refinement of implementation and dissemination processes around concussion guidelines across sports. © 2015 Sports Medicine Australia
The efficacy of an iterative “sequence of prevention” approach to injury prevention by a multidisciplinary team in professional rugby union
- Authors: Tee, Jason; Bekker, Sheree; Collins, Rob; Klingbiel, Jannie; van Rooyen, Ivan; van Wyk, David; Till, Kevin; Jones, Ben
- Date: 2018
- Type: Text, Journal article
- Relation: Journal of Science and Medicine in Sport Vol. 21, no. 9 (2018), p. 899-904
- Full Text:
- Reviewed:
- Description: Objectives: Due to the complex systems nature of injuries, the responsibility for injury risk management cannot lie solely within a single domain of professional practice. Interdisciplinary collaboration between technical/tactical coaches, strength and conditioning coaches, team doctors, physical therapists and sport scientists is likely to have a meaningful impact on injury risk. This study describes the application and efficacy of a multidisciplinary approach to reducing team injury risk in professional rugby union. Design: Observational longitudinal cohort study. Methods: Epidemiological injury data were collected from a professional rugby union team for 5 consecutive seasons. Following each season, these data informed multidisciplinary intervention strategies to reduce injury risk. The effectiveness of these strategies was iteratively assessed to inform future interventions. Specific examples of intervention strategies are provided. Results: Overall team injury burden displayed a likely beneficial decrease (−8%; injury rate ratio (IRR) 0.9, 95%CI 0.9–1.0) from 2012 to 2016. This was achieved through a most likely beneficial improvement in non-contact injury burden (−39%; IRR 0.6, 95%CI 0.6–0.7). Contact injury burden was increased, but to a lesser extent (+18%; IRR 1.2, 95%CI 1.1–1.3, most likely harmful) during the same period. Conclusions: The range of skills required to effectively manage complex injury phenomena in professional collision sport crosses disciplinary boundaries. The evidence presented here points to the effectiveness of a multidisciplinary approach to reducing injury risk. This model will likely be applicable across a range of team and individual sports.
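The injury rate ratios (IRR) reported above compare a rate per unit exposure against a reference period; a minimal sketch (the study's rates concern injury burden, i.e. days lost per 1000 player-hours, and the figures below are illustrative):

```python
def injury_rate_ratio(events_a, exposure_a, events_b, exposure_b):
    """Rate ratio: events per unit exposure in period A relative to
    reference period B. IRR < 1 indicates the rate fell."""
    return (events_a / exposure_a) / (events_b / exposure_b)

# e.g. 54 events over 9000 h vs 60 events over 9000 h in the reference period
print(round(injury_rate_ratio(54, 9000, 60, 9000), 3))  # 0.9, a 10% decrease
```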
Multivariate modelling of subjective and objective monitoring data improve the detection of non-contact injury risk in elite Australian footballers
- Authors: Colby, Marcus; Dawson, Brian; Peeling, Peter; Heasman, Jarryd; Rogalski, Brent; Drew, Michael; Stares, Jordan; Zouhal, Hassane; Lester, Leanne
- Date: 2017
- Type: Text, Journal article
- Relation: Journal of Science and Medicine in Sport Vol. 20, no. 12 (2017), p. 1068-1074
- Full Text:
- Reviewed:
- Description: Objectives: To assess the association between workload, subjective wellness, musculoskeletal screening measures and non-contact injury risk in elite Australian footballers. Design: Prospective cohort study. Methods: Across 4 seasons in 70 players from one club, cumulative weekly workloads (acute: 1 week; chronic: 2-, 3-, 4-week) and acute:chronic workload ratios (ACWR: 1-week load/average 4-weekly load) for session-Rating of Perceived Exertion (sRPE) and GPS-derived distance and sprint distance were calculated. Wellness, screening and non-contact injury data were also documented. Univariate and multivariate regression models determined injury incidence rate ratios (IRR) while accounting for interaction/moderating effects. Receiver operating characteristics determined model predictive accuracy (area under curve: AUC). Results: Very low cumulative chronic (2-, 3-, 4-week) workloads were associated with the greatest injury risk (univariate IRR = 1.71–2.16, 95% CI = 1.10–4.52) in the subsequent week. In multivariate analysis, the interaction between a low chronic load and a very high distance ACWR (adj-IRR = 2.60, 95% CI = 1.07–6.34) or low sRPE ACWR (adj-IRR = 2.52, 95% CI = 1.01–6.29) was associated with increased injury risk. Subjectively reporting “yes” (vs. “no”) for old lower limb pain and heavy non-football activity in the previous 7 days (multivariate adj-IRR = 2.01–2.25, 95% CI = 1.02–4.95) and playing experience (>9 years) (multivariate adj-IRR = 2.05, 95% CI = 1.03–4.06) was also associated with increased injury risk, but screening data were not. Predictive capacity of multivariate models was significantly better than univariate (multivariate AUC = 0.70, 95% CI 0.64–0.75; univariate AUC range = 0.51–0.60). Conclusions: Chronic load is an important moderating factor in the workload–injury relationship. Low chronic loads coupled with low or very high ACWR are associated with increased injury risk.
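The ACWR definition given in the abstract (1-week load divided by the average 4-weekly load) can be sketched directly; treating the chronic window as including the acute week (a coupled ratio) is an assumption consistent with that wording:

```python
def acwr(weekly_loads):
    """Acute:chronic workload ratio per the abstract's definition:
    the most recent week's load divided by the mean of the last 4
    weekly loads (the chronic window here includes the acute week)."""
    if len(weekly_loads) < 4:
        raise ValueError("need at least 4 weekly loads")
    acute = weekly_loads[-1]
    chronic = sum(weekly_loads[-4:]) / 4
    return acute / chronic

# illustrative sRPE loads (AU) for the last four weeks
print(round(acwr([2400, 2600, 2500, 1500]), 2))  # 0.67: a low acute week
```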
Seasonal time-loss match injury rates and burden in South African under-16 rugby teams
- Sewry, Nicola, Verhagen, Evert, Lambert, Mike, van Mechelen, Willem, Readhead, Clint, Viljoen, Wayne, Brown, James
- Authors: Sewry, Nicola , Verhagen, Evert , Lambert, Mike , van Mechelen, Willem , Readhead, Clint , Viljoen, Wayne , Brown, James
- Date: 2019
- Type: Text , Journal article
- Relation: Journal of Science and Medicine in Sport Vol. 22, no. 1 (2019), p. 54-58
- Full Text:
- Reviewed:
- Description: Objectives: Youth rugby union is a popular sport with a high injury incidence density (IID) and burden. This high risk has called for further research into the factors affecting injuries in youth rugby. The aim of the study was to analyse time-loss IID and burden in multiple schoolboy rugby teams over a season and the potential factors associated with injury. Design: Prospective cohort. Methods: All time-loss injuries were recorded from three schools for the whole season. Overall IID and injury burden were calculated, as well as for injury event, type, location and the match quarter in which they occurred, and Poisson regression analyses were performed to determine differences. Results: IID was 28.8 (18.9–38.6) injuries per 1000 player hours over the season, with an injury burden of 379.2 (343.6–414.9) days lost per 1000 player hours. The ball-carrier had a significantly higher IID (11.3 (5.2–17.5) per 1000 player hours) compared to other events, and joint (non-bone)/ligament injuries were the most common (IID of 12.2 (5.8–18.6) per 1000 player hours) and severe type of injury (burden of 172.6 (148.5–196.6) days lost per 1000 player hours). Conclusions: The IID was similar to previous youth rugby studies; however, the injury burden was much lower. The South African youth cohort showed similar factors associated with injury for inciting event (the tackle), injury type (joint (non-bone)/ligament) and location (lower limb) as seen in other studies in both youth and senior players.
The impact of strength level on adaptations to combined weightlifting, plyometric, and ballistic training
- James, Lachlan, Haff, Gregory, Vincent, Kelly, Connick, Mark, Hoffman, Ben, Beckman, Emma
- Authors: James, Lachlan , Haff, Gregory , Vincent, Kelly , Connick, Mark , Hoffman, Ben , Beckman, Emma
- Date: 2018
- Type: Text , Journal article
- Relation: Scandinavian Journal of Medicine & Science in Sports Vol. 28, no. 5 (2018), p. 1494-1505
- Full Text:
- Reviewed:
- Description: The purpose of this investigation was to determine whether the magnitude of adaptation to integrated ballistic training is influenced by initial strength level. Such information is needed to inform resistance training guidelines for both higher-and lower-level athlete populations. To this end, two groups of distinctly different strength levels (stronger: one-repetition-maximum (1RM) squat = 2.01 ± 0.15 kg·BM
Association between preseason training and performance in elite Australian football
- McCaskie, Callum, Young, Warren, Fahrner, Brendan, Sim, Marc
- Authors: McCaskie, Callum , Young, Warren , Fahrner, Brendan , Sim, Marc
- Date: 2019
- Type: Text , Journal article
- Relation: International Journal of Sports Physiology and Performance Vol. 14, no. 1 (2019), p. 68-75
- Full Text:
- Reviewed:
- Description: Purpose: To examine the association between preseason training variables and subsequent in-season performance in an elite Australian football team. Methods: Data from 41 elite male Australian footballers (mean [SD] age = 23.4 [3.1] y, height =188.4 [7.1] cm, and mass = 86.7 [7.9] kg) were collected from 1 Australian Football League (AFL) club. Preseason training data (external load, internal load, fitness testing, and session participation) were collected across the 17-wk preseason phase (6 and 11 wk post-Christmas). Champion Data© Player Rank (CDPR), coaches’ ratings, and round 1 selection were used as in-season performance measures. CDPR and coaches’ ratings were examined over the entire season, first half of the season, and the first 4 games. Both Pearson and partial (controlling for AFL age) correlations were calculated to assess if any associations existed between preseason training variables and in-season performance measures. A median split was also employed to differentiate between higher- and lower-performing players for each performance measure. Results: Preseason training activities appeared to have almost no association with performance measured across the entire season and the first half of the season. However, many preseason training variables were significantly linked with performance measured across the first 4 games. Preseason training variables that were measured post-Christmas were the most strongly associated with in-season performance measures. Specifically, total on-field session rating of perceived exertion post-Christmas, a measurement of internal load, displayed the greatest association with performance. Conclusion: Late preseason training (especially on-field match-specific training) is associated with better performance in the early season.
Do neurocognitive SCAT3 baseline test scores differ between footballers (soccer) living with and without disability? A cross-sectional study
- Weiler, Richard, van Mechelen, Willem, Fuller, Colin, Ahmed, Osman, Verhagen, Evert
- Authors: Weiler, Richard , van Mechelen, Willem , Fuller, Colin , Ahmed, Osman , Verhagen, Evert
- Date: 2018
- Type: Text , Journal article
- Relation: Clinical Journal of Sport Medicine Vol. 28, no. 1 (2018), p. 43-50
- Full Text:
- Reviewed:
- Description: OBJECTIVE: To determine if baseline Sport Concussion Assessment Tool, Third Edition (SCAT3) scores differ between athletes with and without disability. DESIGN: Cross-sectional comparison of preseason baseline SCAT3 scores for a range of England international footballers. SETTING: Team doctors and physiotherapists supporting England football teams recorded players' SCAT3 baseline tests from August 1, 2013 to July 31, 2014. PARTICIPANTS: A convenience sample of 249 England footballers, of whom 185 were players without disability (male: 119; female: 66) and 64 were players with disability (male learning disability: 17; male cerebral palsy: 28; male blind: 10; female deaf: 9). ASSESSMENT AND OUTCOME MEASURES: Between-group comparisons of median SCAT3 total and section scores were made using the nonparametric Mann–Whitney–Wilcoxon ranked-sum test. MAIN RESULTS: All footballers with disability scored higher symptom severity scores compared with male players without disability. Male footballers with learning disability demonstrated no significant difference in the total number of symptoms, but recorded significantly lower scores on immediate memory and delayed recall compared with male players without disability. Male blind footballers scored significantly higher for total concentration and delayed recall, and male footballers with cerebral palsy scored significantly higher on balance testing and immediate memory, when compared with male players without disability. Female footballers with deafness scored significantly higher for total concentration and balance testing than female footballers without disability. CONCLUSIONS: This study suggests that significant differences exist between SCAT3 baseline section scores for footballers with and without disability. Concussion consensus guidelines should recognize these differences and produce guidelines that are specific for the growing number of athletes living with disability.
Abrasion injuries on artificial turf : A systematic review
- Twomey, Dara, Petrass, Lauren, Fleming, Paul, Lenehan, Kurt
- Authors: Twomey, Dara , Petrass, Lauren , Fleming, Paul , Lenehan, Kurt
- Date: 2019
- Type: Text , Journal article , Review
- Relation: Journal of Science and Medicine in Sport Vol. 22, no. 5 (2019), p. 550-556
- Full Text:
- Reviewed:
- Description: Objectives: To review the incidence of abrasion injuries sustained on artificial turf playing fields and the level of evidence existing on player perceptions of abrasion injuries on these surfaces. Design: Systematic review. Method: A systematic search was performed using SPORTDiscus, Medline, Web of Science, Scopus and Science Direct databases. Inclusion criteria included: abrasion type injuries measured; conducted on artificial/synthetic turf; type of sport reported; peer-reviewed original research; English language search terms, but no language restrictions. A quality assessment was conducted using the Newcastle-Ottawa quality scale. Results: The search yielded 76 potential articles, with 25 meeting all inclusion criteria. Twenty articles were injury-based and five were perception–based. The differences in injury definition and the lack of details of the playing surfaces produced varying results on the rate of injuries on artificial turf. Regardless of the condition of the surface, the level of play, or the sport, players perceived the fear of abrasion injuries as a major disadvantage of artificial turf surfaces. Conclusions: The review highlighted the current disparity that exists between players’ perceptions of abrasion injuries and the level of evidence of abrasion injury risk on artificial turf playing surfaces. There is a need for the inclusion of greater detail of playing surfaces’ specifications and condition, and an injury definition sufficiently sensitive to better measure abrasion injury incidence and severity. Without this more detailed information, it is likely that the strongly perceived risk of abrasion injuries will continue as a barrier to the adoption of artificial playing surfaces.