Artificial intelligence technology matured significantly in 2023, resulting in a flood of laws and standards in an attempt to regulate it. In Canada, September saw the Minister of Innovation, Science and Industry release a “Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems”. This code outlines measures that Canadian businesses are being urged to implement as they await the enactment of the AIDA (Artificial Intelligence and Data Act), which the federal government is aiming to have in force by 2025. Then on October 30, 2023, U.S. President Joe Biden issued the “Executive Order on Safe, Secure and Trustworthy AI Development and Use of Artificial Intelligence”, recognizing the benefits of the U.S. government’s use of AI while detailing core principles, objectives and requirements to mitigate risks. On December 8, 2023, the EU reached political agreement on the AI Act, its comprehensive framework for the regulation of AI. The act scales requirements based on the risk-level of the underlying AI system.
This article covers the Canadian Code. As generative AI is increasingly used in business practices, the Canadian guidelines provide organizations with key considerations for navigating AI in an ethical and responsible manner.
The AIDA aims to promote the responsible integration of generative AI that protects Canadians. The new code is the start of government intervention that attempts to direct AI innovation and management in a constructive manner. The Canadian government will ideally build on this framework, taking into consideration the consultations of various stakeholders across Canada, as well as support a transparent regulatory development processes. Input on the code will ensure that the AIDA regulations and their subsequent outcomes are both aligned with Canadian values and international standards. The Government of Canada intends for Canadian organizations to be recognized at the forefront on a global scale for meeting robust artificial intelligence best practices.
As widespread generative AI use may introduce harmful risks (the promotion of negative biases, the deception of the democratic and criminal justice system through media fabrication, privacy violations), following the measures outlined in the voluntary code will give Canadian businesses the opportunity to address and mitigate such risks.
This voluntary code on the responsible development of AI systems has six major areas of focus: 1) accountability, 2) safety, 3) fairness and equity, 4) transparency, 5) human oversight and monitoring, and 6) validity and robustness. Below is a summary of each principle’s main intention and requirements (more information can be found on the Government of Canada website):
1. Accountability
The accountability principle ensures that firms understand their position regarding the AI systems they manage or develop, and that they share necessary information with other firms and enact proper risk management systems. This principle requires an organization to:
-
-
- Establish a thorough risk management framework tailored to the nature and risk profile of activities;
- Develop policies, procedures, and training to familiarize staff with their responsibilities and the organization’s risk management practices;
- Collaborate with firms performing complementary roles in the ecosystem to exchange information and share best practices in risk management; and
- Use several lines of defense, including the execution of third-party audits before the release of AI systems.
-
2. safety
The safety principle ensures that systems undergo risk assessments, and that safe operation is secured through pre-deployment mitigation of risk. This principle requires an organization to:
-
-
- Conduct a thorough evaluation of reasonably foreseeable potential adverse impacts, including risks linked to the improper or malicious use of the AI system;
- Provide downstream developers and managers with guidance on the proper usage of the system, including details on measures taken to address potential risks; and
- Deploy appropriate measures to mitigate the risk of harm, including the establishment of safeguards against malicious use.
-
3. Fairness and equity
The fairness and equity principle ensures that organizations assess potential fairness and equity impacts in AI systems throughout development and deployment. This principle requires an organization to:
-
-
- Implement diverse testing methods and measures to evaluate and address the risk of biased output before the release; and
- Evaluate and curate datasets employed for training to oversee data quality and mitigate potential biases.
-
4. transparency
The transparency principle ensures that experts assess whether risks have been properly addressed and that customers make informed decisions. This principle requires an organization to:
-
-
- Publish details about the limitations and capabilities of the AI system;
- Create and deploy a dependable and openly accessible technique for identifying content generated by the AI system;
- Clearly and prominently label AI systems that could be misconstrued as human-operated; and
- Provide a comprehensive account of the training data types utilized in the system.
-
5. human oversight and monitoring
The human oversight and monitoring principle ensures that the use of AI systems are monitored post-deployment and are constantly updated as risks materialize. This principle requires an organization to:
-
-
- Supervise the system’s functioning post-release through third-party feedback channels to detect any harmful uses or impacts;
- Notify the developer and/or implement usage controls as necessary to mitigate potential harm; and
- Keep a record of reported incidents following deployment and provide timely updates as required to maintain effective mitigation measures.
-
6. validity and robustness
The validity and robustness principle ensures that AI systems are secure against cyber-attacks, that their behavior in possible tasks and situations is understood, and that they operate as intended. This principle requires an organization to:
-
-
- Utilize an extensive range of testing methods across a variety of tasks and contexts before deployment to evaluate performance and guarantee resilience;
- Assess cyber-security risks and enact appropriate measures to mitigate said risks (especially concerning data poisoning);
- Conduct adversarial testing (red-teaming) to pinpoint vulnerabilities; and
- Conduct benchmarking to evaluate the model’s performance against established standards.
-
It is increasingly important to make sure your service providers, if they are using generative AI, conform to this code and other risk-based AI standards and laws globally. Not only is it best practice, but this will make the shift to adhering to the legally binding regulations of the AIDA far smoother.
At PRIVATECH, we offer a recently updated Supplier Privacy Risk Management Toolkit containing detailed privacy and data security questionnaires that include assessing generative AI used by your suppliers, a detailed template data protection addendum for your service contracts, and other key resources. The questionnaire makes it easy for organizations to evaluate their service providers against the voluntary code of conduct for generative AI discussed in this article. To find out more about the toolkit, CLICK HERE.