Risk assessment and mitigation
Guidance on best practices for AI risk assessment and mitigation in sport.
To ensure AI technologies are used safely, fairly, and in line with participant and community expectations, a clear risk assessment should be conducted before initial use and reviewed regularly as part of ongoing governance.
These assessments don’t need to be complicated. Key questions for sports organisations to consider include:
Purpose and scope
- What is the AI tool meant to do?
- Is it solving a real problem, or just adding complexity?
Users and impact
- Who will use the system, and who will be affected by its outputs (for example athletes, coaches, parents, administrators)?
Data inputs
- What data does the system rely on? Is it personal, biometric, or sensitive?
- Are the data collection and storage compliant with the Privacy Act 1988 and the Australian Privacy Principles?
Human oversight
- Is there always a person who can check, question, or override the AI system’s outputs?
Vulnerability of participants
- Are children, young athletes, or other groups in power-imbalanced situations affected? If so, extra care and safeguards are required.
Risk severity and likelihood
- What could go wrong if the AI system fails or produces errors?
- Could it affect safety, wellbeing, or fairness?
Mitigation and safeguards
- What protections are in place? Examples include opt-in consent, secure storage, regular audits, and a clear appeals process for anyone who feels an AI-driven decision is unfair.
Training and education
- Are participants and staff provided with guidance on privacy, consent, and recognising misuse (for example over-monitoring)?
- Are clear resources such as handouts, FAQs, or website updates available to explain how the data and technology are used and why they are beneficial?
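The "risk severity and likelihood" questions above are often answered with a simple risk matrix: each identified risk is scored for how severe the consequences would be and how likely it is to occur, and the combined score determines the priority of mitigation. A minimal sketch in Python (the 1–5 scales, thresholds, and example risks are illustrative assumptions, not part of this guidance):

```python
def risk_level(severity: int, likelihood: int) -> str:
    """Combine severity and likelihood (each rated 1-5) into a risk level.

    Thresholds here are illustrative; organisations should set their own.
    """
    score = severity * likelihood
    if score >= 15:
        return "high"      # act before deployment; may require redesign
    if score >= 8:
        return "medium"    # deploy only with documented safeguards
    return "low"           # monitor as part of routine governance

# Hypothetical entries from a club's AI risk register:
risks = [
    ("Biometric data breach", 5, 2),
    ("Inaccurate performance rating affects selection", 4, 3),
    ("Chatbot gives outdated fixture times", 2, 3),
]

for name, severity, likelihood in risks:
    print(f"{name}: {risk_level(severity, likelihood)}")
```

A tabulated version of the same scoring (severity down one axis, likelihood along the other) can be included in the risk assessment document itself, so the rating rationale is visible to participants and reviewers.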