How to Navigate AI Regulation: A Loyalty Perspective | Part II

Albert Luk & Kienan McLellan

 


In Part 1 of our series, we summarized the evolving AI regulation landscape by region. In Part 2, we explore how AI is changing the loyalty landscape and the best practices businesses should keep top of mind as they navigate it.


How Loyalty Players Can Navigate

Loyalty solutions stand at the crossroads of innovation and responsibility when it comes to AI disruption. As set out in Part 1, regulators require all stakeholders to respect customers' privacy rights while using AI to enhance their experience. As such, any advancement in AI capabilities should deliver benefits to the end customer, with an emphasis on transparency and trust, given the principle of value exchange at the heart of these programs.

Transparency in Value Exchange

A core principle of any loyalty program is an equitable value exchange between the program and its members, and that exchange must be rooted in transparency. In an era where the technological capabilities of AI allow for the collection of facial cues and biometrics, it's crucial that innovation remains compliant and doesn't come at the cost of member trust or ethics.

AI regulation at its core is built on the foundation of privacy laws. Bond continues to advocate that programs be transparent about the data they collect from members, how it is used, and the value exchange it supports. This transparency is not only table stakes for compliance; it also deepens a member's trust in and loyalty to a program.

Avoidance & Mitigation of Bias

AI has an incredibly advanced ability to digest and process disparate data, which can be used to create increasingly personalized interventions within known-customer solutions. However, when models are not carefully crafted and monitored, this capability becomes a double-edged sword that risks perpetuating biases, especially within financially driven loyalty program structures.

To play on the familiar data and analytics refrain of "garbage in, garbage out," AI regulation seeks to prohibit "bias in, bias out." From hiring algorithms to healthcare diagnostic systems, the full potential of AI will only be realized when this bias is mitigated and businesses and loyalty programs everywhere can rely on accurate results. Testing, review, and quality assurance processes for data all help promote accuracy; at Bond, we test and monitor all of the data used in our machine learning and AI models to filter out bias. Until larger strides are made, businesses should be aware of the technical limitations of AI and implement other, often human, methods of limiting bias. A simple sketch of what such a data check can look like follows below.
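As one illustration only (not a description of Bond's proprietary tooling, and with hypothetical column names), a routine bias check might compare a model's positive-outcome rates across member segments and route the model to human review when the gap exceeds a chosen threshold:

```python
# Minimal sketch of a pre-deployment fairness check: compare positive-outcome
# rates across member segments and flag large disparities for human review.
# Column names ("age_band", "offer_approved") are illustrative assumptions.
import pandas as pd

def outcome_rate_by_segment(scored_members: pd.DataFrame,
                            segment_col: str = "age_band",
                            outcome_col: str = "offer_approved") -> pd.Series:
    """Share of members receiving the positive outcome, per segment."""
    return scored_members.groupby(segment_col)[outcome_col].mean()

def flag_disparity(rates: pd.Series, max_ratio: float = 1.25) -> bool:
    """Flag for review if any segment's rate exceeds another's by more
    than the chosen ratio (a simple demographic-parity style test)."""
    return (rates.max() / max(rates.min(), 1e-9)) > max_ratio

# Example usage with a hypothetical scored dataset:
# rates = outcome_rate_by_segment(scored_members)
# if flag_disparity(rates):
#     print("Potential bias detected; route to human review:\n", rates)
```

Checks like this don't remove bias on their own, but they make it visible early enough for the human review step described above to intervene.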


Prioritization of Human-Centric, Customer-First Outcomes

Given AI's power to increase efficiency and make customer interactions increasingly granular, it's important to use these capabilities to drive customer-growth outcomes so that customers themselves benefit from this developing technology. That includes better prediction of customer needs, more timely support responses, and greater relevance within the scope of loyalty.

As generative AI continues to produce content, the feedback we receive is that people know when something "is produced by a machine" without the requisite human touch that builds true bonds between programs and their members. The predictive value of anticipating customer needs through AI must be balanced with human-centric touchpoints. This is why human intervention is embedded in our own processes: it ensures authentic, memorable touchpoints rather than the feeling of machine-driven, pure-profit capture.

Emotional Analysis in the Age of AI

Bond has developed AI-backed solutions to monitor and evaluate emotional loyalty for programs that we manage or analyze. This happens within the scope of data that customers have knowingly provided; in exchange, we are able to deliver a better value proposition.

These technologies primarily use natural language processing (NLP) to identify emotional responses, and the loyalty drivers behind them, in text data. While it's crucial that loyalty programs deepen their understanding of customer needs for better overall outcomes, this approach never infringes on the customer's right to privacy and autonomy through tools such as facial recognition or other techniques restricted by regulation or industry consensus.
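For readers less familiar with NLP, the basic idea can be illustrated with an off-the-shelf sentiment model applied to consented member feedback. This is a generic sketch, not Bond's production tooling; a loyalty-specific model would be trained or fine-tuned on consented program data:

```python
# Minimal illustration of NLP-based sentiment scoring on member feedback
# that members have knowingly provided (e.g., open-ended survey responses).
from transformers import pipeline

# Generic pre-trained sentiment classifier from the Hugging Face hub.
sentiment = pipeline("sentiment-analysis")

feedback = [
    "Redeeming my points was effortless and the reward arrived early.",
    "I waited 40 minutes for support and still lost my points.",
]

for text, result in zip(feedback, sentiment(feedback)):
    # Each result contains a label (POSITIVE/NEGATIVE) and a confidence score.
    print(f"{result['label']:>8} ({result['score']:.2f}) :: {text}")
```

The same pattern, scaled up and mapped to specific loyalty drivers, is what allows text-based emotional analysis without reaching for more intrusive signals.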


Conclusion

AI regulation is still in its nascent stage. As both the application of AI and its regulation develop, Bond will continue to monitor progress and refine our own point of view on how AI can add value within the regulatory rules of engagement.

 

If you have any questions or comments about the use of AI, or its role within loyalty programs, we invite you to contact Kienan.McLellan@bondbl.com.



About the Authors

Kienan McLellan is Bond’s Director, Analytics & AI Solutions.
Albert Luk is Bond’s General Counsel and Chief Privacy Officer.