The urgent need for a robust legislative AI framework

TL;DR:
Artificial intelligence (AI) is rapidly advancing, with Canada leading in AI patent growth among G7 countries. However, the federal government has yet to pass robust AI legislation, relying instead on voluntary corporate codes and union advocacy. While some steps, like the proposed Artificial Intelligence and Data Act (AIDA) and the creation of the Canadian Artificial Intelligence Safety Institute (CAISI), show progress, the creative industries—especially performers—face significant risks from unregulated generative AI. Internationally, some countries in the EU and the U.S. have enacted stronger protections, with California and Tennessee introducing performer-specific laws. In Canada, ACTRA has been actively lobbying for protections based on Consent, Compensation, and Control (the “3C’s”). While waiting for meaningful legislation, ACTRA Toronto is advocating for provincial changes to protect workers and empower members to drive change.

(7-minute read)

Artificial intelligence (AI) technology continues to grow at a rapid pace, with new applications of AI reported almost daily. In Canada, the number of AI patents filed by Canadian inventors increased by 57 per cent in 2022–23 compared to the previous year—nearly three times the G7 average of just 23 per cent over the same period.[1]

Canadian legal landscape

While there are many advantages to AI and AI tools, legislation needs to be in place to ensure AI is not used for nefarious purposes and to provide protections for individual citizens and workers.

While our federal government boasts that Canada is “one of the first countries in the world to propose a law to regulate AI,” it has failed to pass any meaningful legislation, leaving it to corporations to sign on to a voluntary code of conduct for the responsible development and management of generative AI systems, or to unions to protect workers from unregulated AI by negotiating language into their collective agreements.

The federal government’s first consultation on AI was conducted in 2021 as part of the government’s work toward amending the Copyright Act (the statutory review of the Copyright Act officially began in 2017, but it has yet to result in any proposed legislation). The first AI consultation focused on the extent to which copyright-protected works are integrated in AI applications and the consequences of the misuse of AI technology. A second consultation on the impacts of recent developments in generative AI on the creative industries was conducted in 2023 (read ACTRA Toronto’s submission here).

In 2023, the federal government also proposed the Artificial Intelligence and Data Act (AIDA) – introduced as part of the Digital Charter Implementation Act, 2022 or Bill C-27 – which, if passed, would ensure AI systems deployed in Canada are safe and non-discriminatory and would hold businesses accountable for how they develop and use these technologies.

Most recently, the Government of Canada launched the Canadian Artificial Intelligence Safety Institute (CAISI) to leverage Canada’s world-leading AI research ecosystem and talent base to advance the understanding of risks associated with advanced AI systems and to drive the development of measures to address those risks.[2]

Canadian organizations are also leading the way in advocating for the responsible development of AI. Quebec-based Mila and the International Center of Expertise in Montreal on Artificial Intelligence (CEIMIA) recently released the most comprehensive policy report to date on gender equality and diversity in AI. The new report, entitled Towards Substantive Equality in AI: Transformative AI Policy for Gender Equality and Diversity, aims to empower states and other stakeholders to create inclusive, equitable, and just AI ecosystems.[3]

This issue is especially important to our screen-based industries where it is imperative to address how the rise of AI in the entertainment industry could disproportionately impact marginalized performers, including those from underrepresented racial, gender and disability groups.

Sidebar: With the explosion of new AI tools being the most recent disruptor in the screen-based media industry, what are the key issues facing performers? Learn more in the Performers Magazine article The 3C’s: Protecting Performers in the age of AI.
Global legal landscape

Internationally, the development or implementation of meaningful AI legislation is at various stages in a number of countries, including Australia, Brazil, India, Japan, Switzerland and the United States. The European Union is the most progressive, having passed the world’s first comprehensive legal framework – the AI Act – in March 2024 (it came into force in August 2024).

While the EU’s legislation addresses AI systems as a whole, the United States has put forth two pieces of AI legislation specific to generative AI abuse – one would protect individuals from the unauthorized copying of a person’s individuality and a second would create an enforceable new federal intellectual property right allowing victims of nonconsensual deepfakes and voice clones to have them quickly taken down and recover damages.

Both California and Tennessee have enacted AI-specific state laws to protect performers. In January 2025, two California laws will come into effect – one will protect against the unauthorized exploitation of digital replicas of deceased personalities and the second will protect individuals from the unauthorized use and distribution of digital replicas of their likeness.

Tennessee became the first state in the U.S. to enact legislation designed to protect songwriters, performers and other music industry professionals against the potential dangers of AI. The state’s “ELVIS Act” (the Ensuring Likeness, Voice, and Image Security Act), which came into effect July 1, 2024, adds a person’s voice to the personal attributes already protected under state law.

Canada’s screen-based industry

The increasing use of unregulated generative AI in Canada’s film and television industry, and its potential impact on the livelihoods of Canadian workers, specifically performers, remains a growing concern.

ACTRA has been an active advocate for a robust federal legislative framework to protect Canadian performers from the misuse of AI since 2021. The union has participated in multiple consultation processes, appeared before numerous committees to advocate for changes to proposed legislation and has also launched lobbying campaigns calling for legislation to protect workers from the misuse of AI.

Most recently, ACTRA was on the ground in Ottawa meeting with key decision makers and Members of Parliament to discuss the urgent need for a robust AI and modernized copyright legal framework.

At the centre of ACTRA’s advocacy – and a pillar of its bargaining proposals – are the concepts of Consent, Compensation and Control, also known as “the 3C’s.”

By prioritizing the concepts of the 3C’s, we can shape an AI-driven future that respects individual performer rights, promotes fairness of use, and aligns with the values of a diverse and interconnected global film industry.

So, what exactly would this look like? A performer’s right to: consent to, and be credited for, the use of their NIL Rights[4] in new works and in the training of AI models; be compensated for all AI uses of their NIL Rights; and have control over the use of their NIL Rights (and, once a digital replica is made, any company handling this data must commit to the safe storage and tracking of these files).

Become an AI ACTRAvist

While we wait for federal and provincial governments to implement any meaningful legislation, ACTRA Toronto’s AI Sub-Committee has launched an AI campaign calling on the Government of Ontario to amend the Working for Workers Act to include protections for Ontario workers from AI, and to ensure the arts – including the film, television and digital media industry – are included in Ontario’s Trustworthy Artificial Intelligence (AI) Framework, which supports AI use that is accountable, safe and rights-based.

There is a lot at stake. Ultimately, performers have an important role to play. Your union has provided you with the tools and now it’s up to you to activate change.


References

[1] Canada launches Canadian Artificial Intelligence Safety Institute, Department of Innovation, Science and Economic Development Canada, November 12, 2024;

[2] Ibid.;

[3] New policy report presents key recommendations for gender equality and diversity in AI, Mila – Quebec AI Institute, November 27, 2024;

[4] NIL Rights: collective term encompassing personal voices, sound effects, actions, behaviour, images, likenesses and personalities.
