When Canadian Jake Moffatt’s grandmother died in late 2022, he was at least comforted by an assurance from Air Canada’s helpful AI chatbot that he could claim a compassionate rebate on his airfare after the event. That was a nice thing for an airline to do, made all the easier by a friendly AI, and it eased the sting of an unexpected expense.
There was just one problem.
Air Canada never had a policy of rebating airfares after the event on compassionate grounds. Passengers were required to obtain the compassionate discount before the event, not afterwards. Air Canada duly refused the post-event rebate, and Jake Moffatt took the airline to a small-claims tribunal. He won his case, and Air Canada won a big black eye.
Less than two years later, British logistics company DPD’s AI chatbot began giving strange answers to customers asking where their packages were: answers that included profanity, insults aimed at its own company and even a mocking poem about DPD’s service. Nobody sued DPD, but, probably because we’ve all experienced a package arriving late, the episode became fodder for bad jokes and conversation in every pub in England.
The fascinating thing about these events, and many others like them, is the narrative. Hardly anyone said, “The vendor-supplied AI was insufficiently programmed and hallucinated a response.” Mostly, people said, “The company was careless.” The first of these responses is an understandable glitch that we all expect from computers. The second is a reputational disaster.
When AI systems emulate humans, it’s inevitable that we attach human qualities to them. We would not do this with a simple question-and-answer lookup table. But when an AI system goes out of its way to emulate human language, empathy, conversation and understanding, it’s natural for us to attribute human failings to it as well, such as carelessness and laziness, especially when it goes wrong. The AI becomes a spokesperson for the company, and, like any human spokesperson, it carries the reputation of the business in everything it does and doesn’t do.
For a senior manager, it’s bad enough dealing with the fallout when human staff damage the reputation of the business. But a mis-programmed AI can do the same damage a million times in a single night, creating a million bad customer interactions, each of them seeming deeply personal. It’s a risk that can cascade out of control in a matter of hours.
What’s the solution? Do we just accept that reputational exposure to AI is inevitable? Do we ban AI from all customer interactions? Or do we find a middle path?
The answer varies with industry, and also with human expectation. There’s a world of difference between an AI at a large hospital giving advice on heart surgery options (a bad idea), an AI at a law firm helping you find the right lawyer (possibly an okay idea) and an AI at a department store helping you find the right pair of shoes (probably a good idea). As Air Canada and DPD found out the hard way, people care deeply about their travel and their deliveries, so reputational exposure to AI in these areas needs careful thought.
Just as no wise parent gives a two-year-old more orange juice than they’re willing to clean up from the carpet, so no organization should lean on an AI or a chatbot harder than its reputation or its industry can stand. It’s the old trick of considering the worst-case scenario. If the worst outcome of an AI going wrong (and you must assume that it will go wrong at some point) is a dead patient or a massive lawsuit, then that’s an initiative the senior leadership needs to stop in its tracks, no matter how keen the young people are.
And what about those other cases, the ones where the stakes don’t seem quite so high? The truth is that the world is full of risks and misinformation. The risks and misinformation that really make us angry are the unexpected ones, the wrong roads we end up on when we were sure that a trusted source would set us on the right ones. Nobody is too surprised when our Uncle Phil the road worker gives us bad legal advice, but when a lawyer does it, we get furious. It follows that a big part of the answer to reputational risk from AI, in cases where we feel we can tolerate some level of risk, is simply to tell the truth.
Every customer-facing AI should come with a clear warning, as medications do, delivered in a way that can’t be skipped over. Simply tell people the facts. AI is experimental, often unreliable, very useful – especially if you need an answer at two in the morning – but in no way should it be treated as your sole source of advice. For the foreseeable future, and in critical situations, AI is no substitute for a professional opinion and should not be mistaken for one. It’s all about making sure that there’s clear labeling on the bottle.
Do that, and you’re protecting your customers and yourself from a bad reaction.