The State of AI Regulation in Canada (2024)
August 2024 - 6 min read
Canada's AI regulation is still in development, with the Artificial Intelligence and Data Act under review in Parliament. In the meantime, voluntary guidelines are in place, and provincial and international efforts are progressing. It's important for businesses to stay informed and prepared for future regulatory changes.
Where are we at with the regulation of artificial intelligence technologies in Canada? First, an admission: when we wrote in August 2022 that the federal government's proposed Artificial Intelligence and Data Act (AIDA) was "Coming Soon to AI Near You," we were far too optimistic. It's been over two years, and the government's Bill C-27 is still stuck in committee meetings on Parliament Hill.
AIDA
When it was introduced back in June 2022, AIDA was intended to regulate international and interprovincial trade and commerce in artificial intelligence systems under a harms-based framework.
The Act would create an enforcement mechanism, including substantial fines to deter reckless or malicious uses of AI. It would also establish a new position – the "AI and Data Commissioner" – to oversee compliance.
Since then, the proposed law has been debated, criticized, and refined, but remains more of a conceptual framework, which is as likely to die on the order paper as it is to mature into binding legislation.
By contrast, of course, artificial intelligence (AI) is not stuck in committee; it has been galloping ahead, more or less unbridled.
Non-Binding Guidelines
To fill the gaps in this landscape, in 2023 the federal government issued a Guide on the use of generative artificial intelligence, covering the use of AI tools in the public sector workplace. These are merely guidelines and best practices, without the force of law.
For the private sector, in 2023 the Canadian government threw its support behind a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (the "Voluntary Code").
The Voluntary Code was developed by industry and, like the public sector guide, it carries no regulatory enforcement weight. It has been framed as a set of interim measures in advance of the binding regulation that would come with AIDA. (That's assuming AIDA makes its way through Parliament and becomes law.)
At the time of writing, about thirty Canadian companies have signed on to the Voluntary Code, which commits both developers and managers of advanced generative systems to the following six principles:
- Accountability
- Safety
- Fairness and Equity
- Transparency
- Human Oversight and Monitoring
- Validity and Robustness
The six principles in the Canadian Voluntary Code show the influence of the US version, which is also a set of voluntary commitments developed by industry in cooperation with the Biden administration in 2023.
From the outside, the approach of this Voluntary Code has echoes of the Canadian government's approach to private sector privacy regulation 25 years ago, when principles drafted by industry were adopted clumsily into legislation, spawning the Personal Information Protection and Electronic Documents Act (PIPEDA).
Provincial Regulation
At the same time, there are a number of attempts at the provincial level to regulate the use of AI systems. While this is not an exhaustive list, these provincial efforts include the following:
- Ontario: The Ontario government introduced Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024. The Bill, if passed, would enact the Enhancing Digital Security and Trust Act, 2024 (if you need more acronyms in AI regulation, that is the "EDSTA"). This provincial legislation would allow the government to regulate how public sector entities (to be identified by regulation) use artificial intelligence systems.
- Quebec: The Province of Québec recently retooled its private-sector privacy statute, the Act respecting the protection of personal information in the private sector. It now touches on AI regulation by controlling automated decisions made about individuals without independent human judgment – for example, an algorithm that sorts employment applications, which may have the unintended effect of producing decisions that are individually or systemically unfair or unjust.
- In a recent decision of the Ontario Information and Privacy Commissioner (McMaster University (Re)), the Commissioner reviewed a complaint relating to the personal information handling practices of a Canadian university in the course of online proctoring. Given the broad range of privacy and human rights impacts at stake with the use of AI, the Commissioner went so far as to recommend that the university undertake an Algorithmic Impact Assessment (AIA) in addition to its Privacy Impact Assessment (PIA). An AIA is a risk assessment tool that uses a series of questions and prompts to help organizations assess the potential impacts of Automated Decision-Making (ADM) – a simplified sketch of the idea follows below.
While the Commissioner's recommendations do not have the force of law, they are instructive on the potential direction of future regulatory responses.
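To make the AIA concept concrete, here is a minimal, hypothetical sketch of how such a questionnaire can translate answers into an impact level. The questions, weights, and thresholds are illustrative assumptions only – they are not drawn from the federal government's AIA tool or from the Commissioner's decision.

```python
# Hypothetical Algorithmic Impact Assessment (AIA) sketch. The questions,
# weights, and thresholds are illustrative assumptions, not the federal tool.

# Each question maps an answer to a risk score contribution.
QUESTIONS = {
    "decision_affects_rights": {"yes": 3, "no": 0},            # e.g., employment, education
    "fully_automated_decision": {"yes": 3, "partial": 1, "no": 0},
    "uses_personal_information": {"yes": 2, "no": 0},
    "human_review_available": {"yes": -2, "no": 0},            # mitigation lowers the score
    "system_explains_outcomes": {"yes": -1, "no": 0},          # mitigation lowers the score
}

# Thresholds mapping a total score to an impact level (illustrative).
LEVELS = [
    (2, "Level I - little to no impact"),
    (4, "Level II - moderate impact"),
    (6, "Level III - high impact"),
]

def assess(answers: dict[str, str]) -> str:
    """Total the weighted answers and map the score to an impact level."""
    score = sum(QUESTIONS[q][a] for q, a in answers.items())
    for threshold, level in LEVELS:
        if score <= threshold:
            return level
    return "Level IV - very high impact"

# Example: an online proctoring system making partly automated decisions.
print(assess({
    "decision_affects_rights": "yes",
    "fully_automated_decision": "partial",
    "uses_personal_information": "yes",
    "human_review_available": "no",
    "system_explains_outcomes": "no",
}))  # score 6 -> "Level III - high impact"
```

The real value of such a tool is less the number it produces than the documented reasoning it forces: each question prompts the organization to consider, and record, a specific risk and mitigation before deploying the system.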
International Regulation
Again, this is not an exhaustive list, but the following illustrates the direction of efforts outside Canada:
- USA: The White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights in 2022, and, as noted above, in mid-2023 the White House secured voluntary commitments to a code entitled Ensuring Safe, Secure, and Trustworthy AI. Other organizations have published guidance, such as the National Institute of Standards and Technology (NIST) document entitled "AI Risk Management Framework". Like Canada's Voluntary Code, these instruments are voluntary and non-binding.
- Beginning around 2022, certain states and cities began to regulate the use of ADM technologies in the employment context, such as candidate screening and employment decisions (for example, New York City and Illinois). Certain state data privacy laws also contain provisions regarding ADM systems.
- From a consumer-protection perspective, Colorado has enacted certain Consumer Protections for Artificial Intelligence, which has been signed into law but does not come into effect until 2026. The State of Utah, in turn, enacted the Artificial Intelligence Policy Act, which among other things establishes liability for uses of artificial intelligence (AI) that violate consumer protection laws if not properly disclosed.
- California is working on Bill AB-2013, Generative artificial intelligence: training data transparency, which would require developers of generative artificial intelligence (GenAI) systems or services made available to Californians to post documentation on their websites regarding the data used to train the system or service, including high-level summaries of the datasets used. Again, if passed, this would come into force in 2026 (a hypothetical sketch of such a disclosure follows this list).
- European Union: The EU has enacted the EU AI Act, which enters into force on August 1, 2024, although most of the AI Act's requirements won't be effective until August 2026. It's worth noting that the AI Act will apply extraterritorially to many categories of actors outside the EU – for example, providers or deployers of AI systems that have their place of establishment outside the EU, if the output produced by the AI system is intended to be used within the EU.
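As promised above, here is a hypothetical example of the kind of training-data disclosure that California's AB-2013 contemplates. The field names, system name, and dataset details are illustrative assumptions only, not the statutory list of required elements.

```python
import json

# Hypothetical training-data disclosure of the kind AB-2013 contemplates.
# All names and fields below are illustrative assumptions, not statutory text.
disclosure = {
    "system": "ExampleGen v1",  # hypothetical GenAI service name
    "datasets": [
        {
            "name": "public-web-crawl-2023",  # hypothetical dataset
            "source": "publicly available web pages",
            "collection_period": "2021-2023",
            "contains_personal_information": True,
            "high_level_summary": "General web text used for language model pre-training.",
        }
    ],
    "last_updated": "2024-08-01",
}

# A developer might publish a document like this on its public website.
print(json.dumps(disclosure, indent=2))
```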
Conclusion
For Canadian companies, all of this shows the complexity of navigating the AI regulatory landscape. At Field Law, our Emerging Technology Group monitors current developments in the law to advise clients on the use of AI systems, workplace AI policies, effective employment agreements, supplier contracts, and contract review. Contact Richard Stobbe for assistance understanding how these evolving regulations could impact your business and how to stay compliant in this rapidly changing area.