Addressing the Legal Issues Arising from the Use of AI in Canada

In September 2023, the Canadian government released new guidelines on the use of Artificial Intelligence (AI) tools in the public sector, one more sign of AI's growing impact on businesses. Intellectual property, liability, data privacy, bias, transparency, ethics, and government regulation are all areas where organizations can take steps now to help navigate AI's legal complexities and leverage its potential while mitigating risks.


In September 2023, the federal government issued new guidelines for the use of Artificial Intelligence (AI) tools in the public sector workplace. This guidance is one more indicator of how rapidly AI has evolved and spread across sectors of the Canadian economy, influencing the way many businesses operate and the way individuals interact with technology. AI systems are becoming capable of more complex problem-solving and, arguably, even creative endeavours. While AI can bring numerous benefits, it also raises a host of legal issues.

This article explores some of the most common legal issues arising from the use of AI. We emphasize the need for careful consideration when using this potentially transformative technology.

Intellectual Property Rights

One of the most interesting legal concerns surrounding AI is the question of intellectual property (IP) rights. There is no doubt that AI systems are capable of generating and manipulating content; some argue that AI can apply skill and judgment to take creative steps akin to human creativity. This leads to both philosophical and legal ambiguity regarding the ownership of AI-generated works. For instance, who 'owns' the rights to a piece of music composed with the use of an AI program? Who is the 'author' of the next viral photograph or video to make the rounds of social media? To answer these questions, do we need updates to existing copyright and patent laws?

The US Copyright Office has long been clear that, for copyright to apply, a human author must be involved. In 2023, it issued additional guidance clarifying that works created with the assistance of AI may be eligible for copyright protection, as long as the work involves sufficient human authorship. AI systems themselves cannot author or own copyright-protected works.

The Canadian Intellectual Property Office has not issued comparable formal guidance, but it is likely to apply similar principles. The federal government launched a consultation on AI and copyright, publishing a consultation paper in 2021, before the widespread adoption of newer AI tools such as ChatGPT. The 2021 paper proposed several options, including the creation of "a new and unique set of rights for AI-generated works", which would likely require changes to the Copyright Act.

While changes and clarifications to existing IP laws will take time, businesses can take steps of their own to establish ownership of content created using AI tools. For example, contracts and internal policies can set out expectations of ownership for works created with AI systems, just as employment agreements confirm the ownership of works created in the course of employment.

Liability + Accountability

When prompted, AI can make decisions – seemingly autonomous decisions – and take actions based on vast amounts of data. When AI systems take action, and cause harm, determining liability becomes a complex task. Should the AI developer, the user, or the AI itself be held responsible for the consequences of these actions?

From a legal perspective, where AI systems are used to make decisions, business contracts should be adapted to deal clearly with accountability, liability, and indemnity obligations in the event of damages, harm, or loss flowing from those decisions. Insurance coverage should also be reviewed to determine how it applies when losses are caused by decisions made by AI systems.

Data Privacy + Security

Compliance with data privacy regulations, including Canada’s Personal Information Protection and Electronic Documents Act, remains critical regardless of which technology is used, but AI raises unique privacy-related concerns. AI relies heavily on massive datasets, often involving sensitive personal information. When employees deploy AI tools in the workplace, workplace policies should address the use of personal information and control how it is input into those tools, to ensure compliance with security and privacy obligations.

Bias + Discrimination

AI systems can inadvertently perpetuate biases present in their training data, leading to discriminatory outcomes. Legal challenges arise when AI is used in hiring, lending, and criminal justice decisions, as it can disproportionately impact certain groups. Establishing guidelines and regulations to address AI bias and discrimination is vital to ensure fairness and prevent legal disputes. Amazon famously abandoned an AI hiring tool when the system demonstrated a clear bias against female candidates, reflecting the bias inherent in the dataset the tool had analyzed.

Transparency + Explainability

The "black-box" nature of some AI algorithms raises concerns about transparency and explainability. Stakeholders, including regulators, businesses, users, and affected individuals, may justifiably demand explanations for AI-driven decisions. Legal frameworks should require AI systems to provide transparent explanations, allowing users to understand the rationale behind AI-generated outcomes. However, there are numerous examples of AI decisions that humans simply do not understand.

Ethical Concerns

Ethical dilemmas surrounding AI use can have legal repercussions. Questions about the use of AI in surveillance, autonomous weaponry, hiring decisions, loan approvals, and deepfake technologies highlight the need for ethical considerations to be enshrined in the legal framework. Addressing these concerns will remain one of the defining challenges of AI technologies, and the answers will impact public trust in AI systems.

Canadian Domestic Regulation

As noted in an earlier Field Law update over a year ago, the Canadian federal government has proposed legislation, Bill C-27, which would modernize Canada's privacy laws and introduce the Artificial Intelligence and Data Act. As of the date of this article, Bill C-27 is before the House of Commons; it passed second reading on April 24, 2023, but remains in committee review. If passed, the new law would regulate the use of AI systems by the companies that use them, the providers of such systems, and data processors who deploy AI systems in the course of data processing.

Conclusion

AI continues to permeate many facets of daily life. To mitigate risk and make effective use of this technology, it's important for businesses to address the related legal issues. Intellectual property, liability, data privacy, bias, transparency, ethics, and government regulations are all areas where business owners can take steps, while the law (slowly) adapts and evolves to keep pace with technological advancements.

Rather than wait for federal regulation, businesses are well advised to take internal steps to monitor and control the use of AI systems through workplace policies, effective employment agreements, supplier contracts, and contract review. Contact Richard Stobbe in Calgary or Marc Yu in Edmonton to take proactive steps to prepare for changes in the legal framework as AI becomes more prevalent in the workplace.


[This article was written with the assistance of artificial intelligence technology.]