When AI Chatbots Go Wrong: Understanding Liability for AI-Powered Systems
- Rikki Archibald
- Feb 13
- 5 min read
In today's fast-paced digital world, businesses are increasingly turning to chatbots powered by artificial intelligence (AI) to enhance customer service and streamline operations. However, as AI becomes more widespread, so do the challenges and potential liabilities associated with its use. This article explores a recent case involving Air Canada that highlights the importance of accuracy and reliability in AI-driven systems.

In 2022, an Air Canada customer experienced firsthand the consequences of incorrect information provided by an AI chatbot. This incident underscores the need for businesses to carefully consider their chatbot strategies and ensure robust oversight to prevent similar issues. By examining the differences between basic and generative chatbots, we aim to provide valuable insights for companies looking to adopt AI technology while mitigating risks and maintaining high standards of customer service.
Whether you are just beginning to explore the potential of AI chatbots or seeking to enhance your existing systems, understanding the implications of this case can help guide your decisions and improve your customer interactions. Read on to learn more about the key considerations for deploying chatbots and how to avoid common pitfalls.
The Case of Moffatt v Air Canada
In Moffatt v. Air Canada, 2024 BCCRT 149, an Air Canada customer used the airline's website chatbot to ask about the documentation required for a bereavement fare and whether refunds could be processed retroactively. The chatbot told him he could request a refund by submitting an online form within 90 days of the date the ticket was issued, and it provided a link to Air Canada's bereavement policy page. Relying on that advice, the customer purchased the tickets. When he later applied for the refund, Air Canada told him its actual policy did not allow retroactive bereavement refunds, and it admitted that the chatbot had provided incorrect information.

Two Types of Chatbots
In 2019, Air Canada launched its Artificial Intelligence Labs to improve operations and the customer experience, and it has since used generative chatbots that rely on large language models (LLMs), similar to ChatGPT. There are two main types of chatbots: basic chatbots and generative (LLM-based) chatbots.
Basic Chatbots:
Basic chatbots, commonly implemented across many sectors, are like automated guides that follow a set path. They ask a series of preset questions and, depending on the answers, lead users down different branches. They have predetermined responses for common questions; if you ask something more specific, they may return a programmed response or a link to a related article in a knowledge base, a curated collection of articles and information the chatbot draws on to answer questions.
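To make this concrete, here is a minimal sketch of how a basic chatbot might be wired up: a fixed menu of paths plus keyword lookup into a knowledge base. All of the menu options, keywords, and article links below are hypothetical examples invented for illustration, not any airline's real system.

```python
# Minimal sketch of a basic (rule-based) chatbot: a fixed decision tree
# plus keyword matching against a small knowledge base. All menu options,
# keywords, and article paths below are hypothetical examples.

# Each node asks a preset question and maps user choices to canned answers.
DECISION_TREE = {
    "start": {
        "question": "What can I help you with? (refunds / baggage / booking)",
        "options": {
            "refunds": "Refund requests must be submitted within 90 days. See: /kb/refund-policy",
            "baggage": "Each passenger may check one bag free of charge. See: /kb/baggage",
            "booking": "You can change a booking online up to 24 hours before departure.",
        },
    },
}

# Keyword index into a knowledge base: if no menu option matches,
# fall back to linking a related article rather than generating text.
KNOWLEDGE_BASE = {
    "bereavement": "/kb/bereavement-fares",
    "wheelchair": "/kb/accessibility",
    "pets": "/kb/travelling-with-pets",
}

def respond(user_input: str) -> str:
    text = user_input.lower()
    # 1. Try the fixed menu paths first.
    for option, answer in DECISION_TREE["start"]["options"].items():
        if option in text:
            return answer
    # 2. Fall back to keyword lookup in the knowledge base.
    for keyword, article in KNOWLEDGE_BASE.items():
        if keyword in text:
            return f"This article may help: {article}"
    # 3. Nothing matched: escalate instead of guessing.
    return "I'm not sure about that. Let me connect you with an agent."

if __name__ == "__main__":
    print(respond("How do refunds work?"))
    print(respond("Do you offer bereavement fares?"))
    print(respond("Can I bring my surfboard?"))
```

Because every possible answer is written in advance, a basic chatbot can only be as wrong as its scripts; it cannot invent a refund policy that does not exist.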
Generative Chatbots (LLM-based):
Generative chatbots, like the one in the Air Canada case, provide more sophisticated answers. They use a type of artificial intelligence trained on vast amounts of text data to understand context and generate human-like text. They typically incorporate natural language processing (NLP), which helps the chatbot grasp the meaning behind words rather than just matching individual keywords, resulting in more natural and flexible conversations.
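For contrast, here is an equally minimal sketch of a generative chatbot: it simply forwards the customer's message to an LLM and returns whatever text the model produces. To be clear, Air Canada has not published its implementation; the SDK, model name, and system prompt below are illustrative assumptions only.

```python
# Illustrative sketch of a generative (LLM-based) chatbot using the
# OpenAI Python SDK. This is NOT Air Canada's implementation; the model
# name and system prompt are assumptions chosen for the example.
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def respond(user_input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[
            {
                "role": "system",
                "content": "You are an airline customer-service assistant.",
            },
            {"role": "user", "content": user_input},
        ],
    )
    # The reply is generated text, not a lookup: nothing in this call
    # guarantees the answer matches the airline's actual policies.
    return response.choices[0].message.content

if __name__ == "__main__":
    print(respond("Can I apply for a bereavement fare refund after booking?"))
```

The flexibility is obvious, and so is the risk: nothing in this code checks the generated answer against the company's actual policies.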
Key Differences Between Basic and Generative Chatbots
Flexibility: Basic chatbots follow a fixed path, while LLM-based chatbots can handle a wider range of questions and provide more personalized responses.
Understanding: Basic chatbots recognize specific keywords and use predefined responses, whereas LLM chatbots understand the context and meaning behind words, allowing for more natural interactions.
Liability Issues with Generative Chatbots
"The judge found that although the chatbot has an interactive component, it still forms part of Air Canada’s website, which Air Canada has a responsibility to ensure contains accurate information." ( Moffatt v. Air Canada)
Generative chatbots are susceptible to what is known as “hallucinations.” According to IBM, hallucinations occur when a large language model (LLM) perceives patterns or objects that are non-existent, creating nonsensical or inaccurate outputs. This was the case with the Air Canada chatbot, which provided incorrect information to the customer.
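Hallucinations cannot be switched off entirely, but one common mitigation, not described in the Air Canada case itself, is grounding: retrieve the relevant policy text first and instruct the model to answer only from that text, escalating to a human when no policy applies. The sketch below is one illustration of the idea; the policy snippets, prompt wording, and model name are all assumed for the example.

```python
# Sketch of one hallucination mitigation: answer only from retrieved
# policy text, and escalate otherwise. Policy snippets and prompt
# wording are hypothetical examples, not any airline's real documents.
from openai import OpenAI

client = OpenAI()

POLICIES = {
    "bereavement": (
        "Bereavement fares must be requested before travel. "
        "Refunds cannot be claimed retroactively after the ticket is used."
    ),
    "baggage": "One checked bag up to 23 kg is included on all fares.",
}

def grounded_respond(user_input: str) -> str:
    # Naive retrieval: pick the policy whose topic word appears in the
    # question. Real systems would use embeddings or full-text search.
    policy = next(
        (text for topic, text in POLICIES.items() if topic in user_input.lower()),
        None,
    )
    if policy is None:
        return "I can't find a policy covering that. Connecting you to an agent."
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Answer ONLY using the policy text below. If it does not "
                    "answer the question, say so and offer a human agent.\n\n"
                    f"POLICY: {policy}"
                ),
            },
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content
```

Even a grounded chatbot should have its answers logged and periodically audited against published policies, a practice that speaks directly to the "reasonable care" standard the tribunal applied below.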
The customer captured a screenshot of the chatbot's advice and, dissatisfied with Air Canada's response, brought a claim before British Columbia's Civil Resolution Tribunal, which handles small claims disputes in the province. Air Canada argued that it should not be held liable for information provided by one of its agents or representatives, including a chatbot, in effect suggesting that the chatbot was a separate legal entity responsible for its own actions. It also argued that the customer should have checked the policy page the chatbot linked to. The tribunal rejected these arguments, questioning why a consumer should have to verify information from one part of the website (the chatbot) against another.
The Tribunal Member found that although the chatbot has an interactive component, it still forms part of Air Canada’s website, which Air Canada has a responsibility to ensure contains accurate information. The tribunal found that Air Canada “did not take reasonable care to ensure its chatbot was accurate” and held that this amounted to negligent misrepresentation. The customer was awarded $650.88 in damages, the difference between the fare he paid and the discounted bereavement fare.
"This case highlights the importance of accuracy and reliability in AI-powered customer service systems."
Conclusion
This case highlights the importance of accuracy and reliability in AI-powered customer service systems. For medium to large companies considering the implementation of chatbots, it underscores the necessity of proper oversight and quality assurance. Basic chatbots, while less flexible than generative models, offer a reliable solution for straightforward customer queries without the risk of "hallucinations."
If your business is looking to start using AI-powered customer service but has concerns about liability and privacy, it's crucial to choose the right type of chatbot and ensure its accuracy. At Sena Consulting, we specialize in helping businesses implement innovative customer service solutions, including basic chatbots that provide reliable, consistent support.
Operations can be significantly improved and streamlined with the use of basic chatbots, which offer a low-risk entry point into AI-powered customer service. These chatbots can handle a variety of routine tasks such as answering frequently asked questions, guiding customers through basic troubleshooting steps, processing simple transactions, and providing information about products and services. By automating these repetitive tasks, chatbots free up human agents to focus on more complex customer inquiries and personalized interactions. Implementing a basic chatbot can enhance efficiency, reduce response times, and improve overall customer satisfaction. For business owners interested in exploring how a chatbot can benefit their specific operations, we’re here to offer guidance and support on the best strategies for successful implementation.
Contact us today through our Contact Us page to discuss your chatbot strategy and get started on implementing better AI-based customer service.
References
IBM (2024). What are AI hallucinations? [Online] Available at: https://www.ibm.com/topics/ai-hallucinations [Accessed 19 May 2024].
Moffatt v. Air Canada, 2024 BCCRT 149 (CanLII). Available at: https://www.canlii.org/en/bc/bccrt/doc/2024/2024bccrt149/2024bccrt149.html