policy monitor

Canada – Code of Practice regarding generative AI

The Canadian government sees great opportunities for innovation in generative AI-systems because of their ability to generate new content in many different forms and contexts. However, the Canadian government also voices its concerns as the systems can be used for malicious or inappropriate purposes. To address these concerns, the Canadian government established a code of practice regarding generative AI, in anticipation of receiving royal assent for the Canadian Artificial Intelligence and Data Act (AIDA). The code of practice may be implemented voluntarily by Canadian firms, enabling a smooth transition to compliance with the upcoming AIDA.

What: Policy-orienting document; code of practice

Impactscore: 3

For who: Companies, policy makers

URL: https://ised-isde.canada.ca/site/ised/en/voluntary-code-conduct-responsible-development-and-management-advanced-generative-ai-systems

Summary

Code of practice - elements

The Canadian Government has put together various key elements of a code of practice, based on inputs received during discussions with a broad cross-section of stakeholders. Most of the following measures may be applied in both all advanced generative systems as well as advanced generative systems available for public use, and both by managers and developers of the systems. Developers are those who decide on the methodology selection, collect and process datasets, or build and test the model. Managers are those persons who put the system into operation, control the operation parameters, or access and monitor the system’s operation. Commitment to the code is voluntary for both managers and developers.

Safety

Developers and managers of generative AI systems would:

  • perform a thorough evaluation of reasonably foreseeable potential harmful impacts, including risks regarding to malicious or inappropriate use of the system (e.g., impersonating real individuals by using the system or use of a large language model to generate legal advice) and take action to prevent this from occurring.

Developers of generative AI systems would:

  • take suitable measures to reduce the risks of harm. For example, by creating safeguards against malicious use.
  • provide guidance on appropriate use of the system to downstream developers and managers. This should include information on measures taken to address risks.

Fairness and Equity

Developers of generative AI systems would:

  • evaluate and curate datasets with appropriate and representative data to avoid low-quality data and biased datasets.
  • implement diverse testing methods to evaluate and limit the risk of biased output prior to release (e.g., fine-tuning).

Transparency

Developers of advanced generative systems available for public use would:

  • publish information on capabilities and limitations of the system.
  • develop a free and reliable method for detecting content generated by the AI system, with a near-term focus on audiovisual content (e.g., watermarking).
  • publish a description of the types of training data used to develop the system, including steps taken to identify and address risks.

Managers of generative AI systems would:

  • ensure that systems are identified as AI systems if they could be mistaken for humans.

Human Oversight and Monitoring

Managers of generative AI systems would:

  • monitor the operation of the system, including through the use of third-party feedback channels, to identify and report harmful uses or impacts after the system is made available and inform the developer and/or establish usage controls as needed to reduce harm.

Developers of generative AI systems would:

  • maintain an incident database and provide updates as needed to ensure effective mitigation measures.

Validity and Robustness

Developers of generative AI systems would:

  • make use of a wide range of testing methods across a spectrum of tasks and contexts to measure performance and ensure robustness.
  • use adversarial testing (i.e., red-teaming) to identify vulnerabilities.
  • conduct a cybersecurity risk assessment and implement appropriate risk mitigation measures, including with regard to data poisoning. This also applies to managers of advanced generative systems available for public use.
  • perform benchmarking to measure the performance of the model against recognized standards.

Accountability

Developers and managers of generative systems would:

  • establish a comprehensive risk management framework appropriate to the nature and risk profile of the activities. Including policies, procedures, and training which need to be developed to make employees across the AI value chain familiar with their duties in this process and the organization’s risk management practices.
  • share information and best practices on risk management with firms playing complementary roles in the ecosystem.

Developers of advanced generative systems available for public use would:

  • use multiple lines of defence, this includes conducting third-party audits prior to release.

Additional commitments

In addition to committing to these outcomes, developers and managers who undersign the code of conduct also commit to support the development of a robust and responsible AI ecosystem in Canada as well as to develop and deploy systems in a way that will drive inclusive and sustainable growth in Canada, including with a prioritisation of human rights, accessibility and environmental sustainability.