How to Navigate AI Regulation: A Loyalty Perspective | Part I

Albert Luk & Kienan McLellan

 


The EU AI Act passed on March 13, 2024. Billed by some as the world's first major AI law, the EU AI Act joins a patchwork of existing and proposed AI laws and regulations. For global loyalty players, this patchwork provides both clarity and confusion: clarity, in that the rules of engagement are finally being set; confusion, in that overlapping and arguably contradictory geographic rules of engagement make the operating terrain difficult to navigate.

Confused? Don't be. This blog will cut through the complexity in two parts. Part 1 provides a legal overview of how AI regulation is developing. In Part 2, we'll share best practices for how loyalty programs can navigate these regulations.


How AI is Regulated: An Overview

The following is a general summary of how AI is regulated in four regions. More in-depth commentary on each is outlined below:

 

Canada

  • Centrally regulated? Anticipated to be centralized at the federal level; the Artificial Intelligence and Data Act is not yet law as of March 2024.
  • Philosophy: Proposed to be tiered based on risk levels. Applies to all industries.
  • Prohibited uses of AI (most relevant to loyalty): proposed to include facial recognition, reinforcement of bias, and biometrics.

EU

  • Centrally regulated? Yes, through the AI Office.
  • Philosophy: Tiered between unacceptable, high-risk, and low-risk systems, with different risk management requirements for each tier. Applies to all industries.
  • Prohibited uses of AI (most relevant to loyalty): emotion-recognition systems at work and in education; facial recognition; social credit scoring and biometrics; reinforcement of bias.

United States of America

  • Centrally regulated? Not at the federal level.
  • Philosophy: Voluntary guidance along sector or industry lines, resulting in possibly different guidelines by industry.
  • Prohibited uses of AI (most relevant to loyalty): facial recognition; reinforcement of bias; violation of civil liberties; biometrics.

United Kingdom

  • Centrally regulated? No; a centralized coordination body has been proposed.
  • Philosophy: Consensus-based guidelines along sector or industry lines, resulting in possibly different guidelines by industry.
  • Prohibited uses of AI (most relevant to loyalty): facial recognition; reinforcement of bias; biometrics.



The United States Approach: Standards & guidelines along sectoral lines

At the federal level, the US approach to AI regulation can be described as an industry-specific, consensus-driven approach that results in standards and guidelines rather than prescriptive rules and laws. Underpinning this approach is an emphasis on AI protecting the privacy, civil liberties, equity, and civil rights of Americans.

The White House's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued in October 2023, is to date the most comprehensive federal position on AI. As an executive order, it does not carry the force of legislation and applies only to federal agencies. As such, AI guidelines and voluntary measures (such as major technology firms agreeing to adopt AI watermarking) have been produced through a consensus-driven approach among industry and political leaders. US states have also passed AI laws of their own; these are generally addressed below under "The Commonalities" heading.

The EU Approach: A tiered, comprehensive framework

The EU AI Act, in contrast, takes a comprehensive, risk-based approach. It tiers AI systems into:

  • Unacceptable AI systems (in the loyalty space, the most relevant unacceptable uses include emotion-recognition systems at work and in education, and facial recognition)
  • High-risk AI systems (e.g., AI in vehicles)
  • Low-risk AI systems (all other systems that are neither unacceptable nor high-risk).

The act also establishes an AI Office within the EU. For high-risk and low-risk AI systems, the EU AI Act sets out prescriptive risk management requirements such as data quality, documentation and traceability, accuracy, cybersecurity, and conformity assessments. Penalties for non-compliance can be steep, including fines of up to 7% of global turnover/revenue or 35 million euros, whichever is higher.
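
To make the tiering and penalty exposure concrete, here is a minimal sketch in Python. It assumes a hypothetical loyalty operator; the tier examples and function names are our own illustrations drawn from the summary above, not provisions of the act, and none of this is legal advice.

```python
# Illustrative sketch only -- not legal advice. The tier examples and the
# penalty ceiling paraphrase the summary above; the names are our own.

UNACCEPTABLE = {"emotion recognition at work", "facial recognition",
                "social credit scoring"}
HIGH_RISK = {"ai in vehicles"}  # example cited in the text above

def risk_tier(use_case: str) -> str:
    """Classify a use case into the EU AI Act's three tiers."""
    if use_case in UNACCEPTABLE:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high risk"
    return "low risk"  # anything neither unacceptable nor high risk

def max_fine_eur(global_turnover_eur: float) -> float:
    """Steepest penalty band: up to 7% of global turnover or
    35 million euros, whichever is higher."""
    return max(0.07 * global_turnover_eur, 35_000_000.0)

# A hypothetical loyalty operator with 2 billion euros in global turnover:
print(risk_tier("facial recognition"))    # -> unacceptable
print(f"{max_fine_eur(2e9):,.0f} euros")  # -> 140,000,000 euros
```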

Canada and the United Kingdom: Mirrors of the EU and USA

Canada's proposed Artificial Intelligence and Data Act (AIDA) is not yet federal law; it is currently before the House of Commons as part of Bill C-27. As currently drafted, its approach is closer to the EU's: AIDA proposes the same tiered approach as the EU AI Act and would require "high-impact systems" (e.g., AI in healthcare, employment and hiring decisions, or AI that guides whether to provide services and at what price point) to meet risk management requirements similar to the EU's.

The United Kingdom adopts an approach similar to the US's. The framework takes a light touch by encouraging industry-specific consensus and consciously not "rushing to legislate too early," in the words of the UK government's August 2023 policy paper, "A pro-innovation approach to AI regulation." Instead, the UK is empowering existing regulators to assess how AI will impact their respective jurisdictions, with coordination planned through a proposed central body. It bears noting that the continuing interconnection between the UK and EU economies means the UK will be impacted by the EU AI Act to a greater extent than its Canadian or American counterparts.


The Commonalities

These differing frameworks all share commonalities in how AI should be governed. Different terms of art are employed, but the common principles can be summarized as follows (a brief illustrative sketch follows the list):

  • AI should be safe, secure, and follow best practices in privacy and cybersecurity.
  • AI should be transparent to the user, not only in the disclosure of how AI is being used in decision-making, but also in how that decision-making is arrived at.
  • AI should not discriminate, be biased, or reinforce existing discrimination or bias.
  • AI should be human-centric.
  • AI should remain accountable to existing compliance, legal, and regulatory obligations.
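
For teams that want to operationalize these principles ahead of Part 2, below is a minimal sketch in Python; the checklist structure, class name, and wording of the items are our own illustrative assumptions, not part of any of the frameworks above.

```python
# Illustrative sketch only. The checklist items paraphrase the common
# principles above; the class and field names are our own assumptions.
from dataclasses import dataclass, field

PRINCIPLES = (
    "safe and secure, following privacy and cybersecurity best practices",
    "transparent about AI use and how decisions are reached",
    "non-discriminatory; does not reinforce existing bias",
    "human-centric",
    "accountable to existing compliance, legal, and regulatory obligations",
)

@dataclass
class AIFeatureReview:
    feature_name: str
    # Every principle starts unchecked and must be signed off individually.
    checks: dict = field(default_factory=lambda: dict.fromkeys(PRINCIPLES, False))

    def sign_off(self, principle: str) -> None:
        if principle not in self.checks:
            raise KeyError(f"Unknown principle: {principle}")
        self.checks[principle] = True

    def ready_for_launch(self) -> bool:
        return all(self.checks.values())

# Hypothetical usage for a loyalty AI feature:
review = AIFeatureReview("points-expiry churn model")
review.sign_off(PRINCIPLES[0])
print(review.ready_for_launch())  # False until all five principles are signed off
```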

This concludes our legal summary of current AI regulation. In Part 2, Bond covers AI usage best practices within the loyalty program industry.



About the Authors

Kienan McLellan is Bond’s Director, Analytics & AI Solutions. Albert Luk is Bond’s General Counsel and Chief Privacy Officer.