The malfunction of a chatbot deployed by European delivery firm DPD has raised concerns across the global contact center industry, leaving BPOs questioning the reliability of AI-powered customer service solutions.
The incident unfolded when a British customer, Mr. Beauchamp, contacted DPD’s customer service center to track a parcel. Beauchamp’s experience, as reported by Sky News, was far from ideal.
The chatbot failed to connect him with a human representative or provide any useful information. Frustrated, Beauchamp manipulated the bot into swearing and even composing a poem mocking the quality of DPD’s customer service, calling the company the worst in the industry.
Parcel delivery firm DPD have replaced their customer service chat with an AI robot thing. It’s utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company. It also swore at me. 😂 pic.twitter.com/vjWlrIP3wn
— Ashley Beauchamp (@ashbeauchamp) January 18, 2024
The interaction (which went viral on social media, garnering over a million views) highlights the potential pitfalls of chatbot technology.
Analysts say the incident might prompt many companies to temporarily hold off on chatbot implementation.
The underlying issue lies in the nature of AI models: they learn from the data they are fed, and if not properly maintained and constrained, they can unintentionally amplify biases or patterns within that data. This can lead to unpredictable and potentially harmful behavior.
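One common mitigation is to place a policy check between the model and the customer, so that a reply that violates basic rules never reaches the chat window. The sketch below is purely illustrative (the pattern list, fallback message, and function name are assumptions, not DPD's actual system):

```python
import re

# Illustrative guardrail: screen a chatbot reply against simple policy
# rules before sending it to the customer, falling back to a safe
# canned response on any violation.

BLOCKED_PATTERNS = [
    r"\bdamn\b",                     # placeholder profanity list
    r"worst .* (company|service)",   # self-disparaging claims
]

SAFE_FALLBACK = "I'm sorry, I can't help with that. Let me connect you with an agent."

def guard_reply(reply: str) -> str:
    """Return the reply if it passes the policy check, else a safe fallback."""
    lowered = reply.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return SAFE_FALLBACK
    return reply

print(guard_reply("Your parcel arrives tomorrow."))
print(guard_reply("We are the worst delivery company."))
```

In production, a keyword list like this would typically be replaced or supplemented by a dedicated moderation model, but the architectural point is the same: the raw model output is never trusted to go to the customer directly.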
The DPD incident is not unique. A similar malfunction happened with a car dealership chatbot, which was manipulated into agreeing to sell a vehicle for just one dollar. Back in 2016, Microsoft had to shut down its AI chatbot Tay after it “turned into a Nazi.”