The EU AI Office has released a draft General-Purpose AI Code of Practice. While not legally binding, it is expected to be influential in shaping the AI landscape in the EU. Boards of General-Purpose AI providers should be aware of the implications for their organizations.
The Code of Practice addresses transparency, copyright rules, a taxonomy of systemic risks, safety and security frameworks, risk assessment, technical risk mitigation, and governance risk mitigation. Importantly, the AI Act applies not only to models that are sold but also to those provided for free, including open-source models.
Sub-Measure 15.2 explicitly calls on boards to establish oversight of systemic risks from general-purpose AI models, including by creating dedicated risk committees. The Code also assigns boards responsibility for allocating adequate resources to that oversight, including ensuring that executives have sufficient budgets and access to the right expertise.
Organizations must document their adherence to the Code and to all applicable provisions of the AI Act. This includes technical documentation of AI models, the criteria used to classify models, safety and security framework documentation, and evidence collected during risk assessments.
Risk assessment is a key focus: four Measures address how providers should assess systemic risks continuously, from before training through post-deployment. The Code also requires documented incident response plans and establishes whistleblower protections under the EU Whistleblower Directive.
Boards should keep an eye on the Code's development. While it may change following the public comment period, many of its provisions align with broader best practices for AI governance. If your board lacks sufficient expertise to address AI governance, consider adding a board member with that background or arranging board-level training specific to AI risks.