AI for Customer Service
Everything a Business Owner Needs to Know Before Deploying a Chatbot!
All Website Customer Service will Eventually Be Automated!
LLM-Based chatbots are going to be the future of B2C businesses. They allow a business to supplement the functionality of a live customer service representative, while cutting costs by not having to pay a live agent to be on-call for customer inquiries.
Here's How We Know
According to Userlike, 68% of users enjoy the speed at which chatbots answer. The results speak for themselves, as a majority of respondents find it convenient to interact with chatbots, as opposed to conventional methods of customer service, such as agents or traditional form-fields in webpages.
As AI systems improve in quality at breakneck speeds, this amount is only set to grow Furthermore, chatbots tend to provide higher rates of customer satisfaction, since an LLM is divorced from human flaws such as making mistakes under pressure or forgetfulness. They are much more successful in driving user growth than traditional formfields, and by the middle of the decade are set to replace them.
Chatbots as a Painpoint
LLM's Hallucinate
All indicators point towards businesses needing a chatbot to automate customer service, however even large companies are currently running into trouble getting their B2C chatbots off the ground. Sometimes, as we will see, a chatbot can hallucinate in response to a user query, thereby providing harmful and inaccurate responses that hurt the business or the end-users themselves.
This happened recently at the time of writing in Canada, where a user asked an honest question in regards to the refund policy of a popular airline. The chatbot responded with a false policy, which did not exist. According to a ruling, however, the airline by law had to accept the refund policy given by the chatbot.
By legal precedent, if this happens to your business in Canada and other countries with similar legal frameworks, then you are now also required to fulfill the terms presented to an end-user by the chatbot!
For example, if I can prompt your concert promotion chatbot into giving me a free ticket to the next show, then by legal precedent I am entitled to see my favourite Metal band at $0.00!
Prompt Injection Attacks
LLM's are smart- when it comes to writing. Teaching LLM's deductive reasoning is actually a difficult area of research right now. The one pitfall facing B2C customer service automation right now is that they inherently lack the critical thinking ability of a median person. This makes LLM's prone to a very unique type of cybersecurity threat- prompt injection attacks.
Cybersecurity surrounding LLM's is a very strange and ironic concept. In traditional cybersecurity, hackers may use two kinds of attack to compromise your data:
Software-Based Hacking- exploiting code vulnerabilities, SQL (database) injections, and use of malicious trojan programs to either seize control of your systems or read sensitive information therein
Social Engineering- Use of deception, charisma and superficial charm to manipulate a victim into providing access to systems.
A software hacker can be thought of as a programmer using their knowledge for wrongdoing, while a social engineer is comparable to a seductive pickup artist or a slippery scammer. Both are hackers by definition.
Prompt Injection Attacks As A Novel Cybersecurity Risk
What makes prompt injection unique, is that it is a combination of both traditional types of hacking. It is an attempt to exploit vulnerabilities in a software application (your chatbot) while also using manipulation of human language (social engineering) to trick an "intelligent" entity into doing something against its best interests.
Prompt injection is very difficult to deal with, because the vulnerability it exploits lies not in a line of code, but much like that of a gullible person, lies within the parameters of the LLM's brain. Given that even the smallest production-grade LLM's contain 7 billion parameters (llama-7b, mistral-7b, etc), it is nearly impossible to gaze into its neurons and know exactly what the model is thinking as it responds to a potentially dangerous prompt.
Furthermore, there are so many prompts that could trigger unhelpful responses, that even a model fine-tuned to be better aligned with replies that we prefer (i.e. ones that don't produce non-existent refund policies) may not be truly aligned (see Anthropic paper on Sleeper Agents!)
EquoAI specializes in developing battle-ready Generative AI applications for your business, taking into account the specific details of your business requirements to ensure that whatever can go wrong, does not go wrong!
We would develop a custom set of guardrails to protect your applications from prompt-injection attacks, specific to the types of threats that could arise in a B2C application.
In the case of airline or concert tickets, we'd identify attempts to manipulate your chatbot into giving things away for free, or simply to prevent it from hallucinating and giving it away to the user for free.
Additional measures we would take include:
Input Sanitation- How many tokens long is this prompt?
Output (RAG-Based) Sanitation- is there a document within our policies that actually answers the user's question?