The UK government has announced a new AI Code of Practice, which it says will form the basis of a global standard for securing artificial intelligence, to be developed through the European Telecommunications Standards Institute (ETSI).
Released on Friday, 31st January, the voluntary AI Code of Practice was developed in collaboration with the National Cyber Security Centre (NCSC) and key industry stakeholders. It is accompanied by implementation guidance to help organizations adopt its security principles effectively.
The 13 principles outlined in the code focus on securing AI systems throughout their lifecycle, covering design, development, deployment, maintenance, and end-of-life management.
The guidelines apply to software vendors that develop AI, integrate third-party AI, or offer AI solutions to customers, as well as organizations that develop or use AI services internally.
However, the UK government clarified that AI vendors that sell models and components, but do not directly develop or deploy them, will not fall under this code. Instead, they will be governed by a separate Software Code of Practice and Cyber Governance Code.
The 13 Principles of the AI Code of Practice
- Raise awareness of AI security threats through staff training.
- Design AI systems with security, functionality, and performance in mind.
- Assess risks by modeling threats and implementing mitigation strategies.
- Ensure human accountability for AI decision-making.
- Identify and protect assets, including system dependencies and connectivity.
- Secure AI infrastructure, covering APIs, models, data, and training pipelines.
- Strengthen the software supply chain to prevent security breaches.
- Maintain documentation with an audit trail for system design and updates.
- Conduct rigorous testing and evaluation before deployment.
- Implement secure deployment practices, ensuring data protection and providing security guidance to users.
- Regularly update AI systems with patches and security mitigations.
- Monitor system behavior, keeping logs for security compliance and incident response.
- Ensure proper disposal of AI models and data at the end of their lifecycle.
With AI adoption growing rapidly, the UK government intends the code of practice to serve as a worldwide benchmark for AI security. Because it remains voluntary, however, its effectiveness will largely depend on industry adoption and on future regulatory developments.
Source: Infosecurity Magazine