Why Talent Acquisition Leaders Should Care About The Quality Of Their AI Recruiting Assistant
Simply having a recruiting chatbot won’t be enough to differentiate yourself
Posted on 02-18-2020
Quality is paramount when it comes to recruiting AI and chatbots. Even a single bad interaction with a recruiting chatbot can cause a candidate to drop off. In a recent Digitas survey, 73% of respondents said they would never use a chatbot again if they had a single bad experience during their first encounter. So it's important for HR AI buyers to be able to evaluate chatbot quality.
Thankfully, most AI-for-talent-acquisition vendors on the market have built reputable products. Many recruiting chatbots available today will elevate your employer brand and deliver a positive candidate experience. But that's largely because the bar has been set so low: 85% of candidates are used to never hearing back from employers at all after submitting a job application. Against that standard, giving candidates any response that opens a two-way dialogue will produce dramatic results.
But as more and more companies adopt chatbots, simply having a recruiting chatbot won't be enough to differentiate yourself. Focus on quality now, and acquire a chatbot solution that can deliver the best possible candidate experience. I've put together the following four examples to illustrate some of the key conceptual differences in the conversational experiences that different chatbots create.
Siri won’t let you talk about music the way you like to talk about music

Users engage Siri on an almost infinite variety of topics. The level of understanding Siri has is quite impressive considering the breadth of functions it can perform. However, that breadth often means Siri doesn't have enough depth on any one topic. In this example, a user has engaged Siri in a conversation about music. Siri has only a surface-level understanding of the language humans use to talk about music, so if a user wants Siri to play them music, they have to communicate with it in a very specific and limited way.
When we feel like we aren't being understood, we get frustrated. Imagine being stuck in a foreign country where you don't speak the language and urgently needing to find a public restroom. Trying to find one would be an extremely frustrating experience. You'd use every word for restroom you knew in your native language to no avail. You'd be reduced to pantomiming to reach your goal. The conversational environment would constrain you to a narrow set of options. It doesn't feel good to not have options. It doesn't feel good to not be able to communicate in your natural way of speaking.
Candidates don't feel good when they're restricted to a narrow set of options either. Like in the restroom example, candidates have an urgent need, in this case a job. And rather than being reduced to pantomiming, they'd much rather talk to someone who understands their language and their intent. Bear with me here. One category of recruiting chatbots, often just called bots, offers candidates only yes/no or multiple-choice questions as communication options.
Bots create a restrictive experience that is better than no communication at all, but they don't create the best possible experience for candidates. Bots aren't bad; for some companies, they may be the ideal solution. The issue with bots is that their natural language processing is either absent or not sophisticated enough to understand the way we communicate. If you want your recruiting chatbot to create a personal, human, authentic-feeling conversational experience, basic bots are not your best bet.
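To make that concrete, here is a minimal sketch (in Python, with made-up menu text, not any particular vendor's product) of how a menu-only bot behaves: the candidate can pick from fixed options, but anything said in their own words simply isn't understood.

```python
# Sketch of a menu-only "bot": fixed choices, no natural language understanding.
# The menu text is hypothetical.
MENU = {
    "1": "Yes, I have two or more years of experience",
    "2": "No, I have less than two years of experience",
}

def handle_reply(candidate_reply: str) -> str:
    if candidate_reply in MENU:
        return f"Recorded: {MENU[candidate_reply]}"
    # Anything typed in the candidate's own words falls through.
    return "Please reply with 1 or 2."

print(handle_reply("2"))
print(handle_reply("I've been a nurse for about eighteen months"))  # not understood
```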
A person strays from Poncho’s decision tree…what happens next?

Think about the last time you visited a really nice restaurant. What made your experience with the waiter so good? I imagine the waiter was attentive, understood your needs, was able to offer relevant personalized recommendations, had a personality that suited your mood, and either made no errors or smoothly handled any errors that happened.
Imagine a waiter got your order wrong. He repeated his incorrect perception of your order back to you. You said, “No no, I said I’d like the seafood pasta.” The waiter ignored you and brought out the gnocchi. You said, “No I wanted the seafood pasta.” The waiter responded, “Sure, pasta,” and brought back another plate of gnocchi. Would you stay or get up and leave? You’d probably only stay if you were really hungry, or if you were on a tight schedule, but you wouldn’t enjoy the experience even if the food was good. You’d probably write a bad review of the place on Yelp and tell your friends about how bad your experience was.
The example above shows a now-defunct chatbot named Poncho, which was designed to tell people about the weather. Poncho doesn't interpret the person's question correctly. Then, when the person tries to clarify, Poncho brings out a plate of gnocchi, and another plate of gnocchi after that. Get it?
Some chatbots run on something called a decision tree. They're difficult to identify by question type because they can ask yes/no questions, multiple-choice questions, or even open-ended questions. You know you're dealing with a decision-tree-powered chatbot when it's bad at fixing a mistake or doesn't seem to understand your intent. Poncho was powered by a simple decision tree.
To better understand how a decision-tree-based bot approaches a conversation, let's return to our terrible imaginary waiter, Waiterbot. Waiterbot approaches you looking to follow this path: greet, gather order, deliver the order, refill water, present check, thank, reset. In the pasta example above, you were still at the "gather order" stage, but since Waiterbot had already received and interpreted a response, it had moved on to "deliver the order."
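To illustrate the mechanics, here is a minimal sketch, in Python with hypothetical stage names and replies, of the kind of fixed script a decision-tree bot follows. Once a stage has been passed, the bot never revisits it, no matter what the user says, which is exactly how the gnocchi keeps arriving.

```python
# Minimal sketch of a decision-tree "Waiterbot". Stage names and replies are
# hypothetical; the point is the one-way flow through a fixed script.
STAGES = ["greet", "gather_order", "deliver_order", "refill_water",
          "present_check", "thank", "reset"]

class Waiterbot:
    def __init__(self):
        self.stage = 0      # index into STAGES; only ever moves forward
        self.order = None

    def respond(self, message: str) -> str:
        current = STAGES[self.stage]
        if current == "greet":
            self.stage += 1
            return "Welcome! What would you like to eat?"
        if current == "gather_order":
            # The bot records whatever it (mis)interprets and advances;
            # there is no path back to this stage.
            self.order = "gnocchi" if "pasta" in message.lower() else message
            self.stage += 1
            return f"One order of {self.order}, coming right up."
        if current == "deliver_order":
            # A correction like "No, I wanted the seafood pasta" lands here,
            # but this stage only knows how to deliver the recorded order.
            self.stage += 1
            return f"Here is your {self.order}. Enjoy!"
        return "..."

bot = Waiterbot()
print(bot.respond("Hi"))
print(bot.respond("I'd like the seafood pasta"))      # misinterpreted as gnocchi
print(bot.respond("No, I wanted the seafood pasta"))  # too late: gnocchi again
```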
Once Waiterbot charts a course, there's no turning back. This isn't something most HR leaders want in their recruiting chatbot. Imagine if you had a chatbot that, like Poncho, derailed the entire conversation the instant a candidate deviated slightly from the chatbot's set path. Your company is likely not on Yelp, but you'd get some bad reviews on Glassdoor and social media for sure!
This would never happen with a conversational AI (CAI), because a CAI doesn't work off a decision tree. A CAI can move fluidly with the conversation because it recognizes user intent and needs, just like a good waiter. If you want a good waiter at your candidate-experience restaurant, opt for a conversational AI over a decision-tree-based chatbot.
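For contrast, here is an equally small sketch of the intent-driven approach. The keyword lists below are only a stand-in for the trained natural language models a real conversational AI would use, but they show the structural difference: the reply is chosen from what the user means, not from the bot's position in a fixed script, so a correction is handled whenever it arrives.

```python
# Toy intent recognizer -- keyword matching stands in for a trained NLU model.
INTENTS = {
    "correct_order": ["no,", "wrong", "i wanted", "i said"],
    "place_order":   ["i'd like", "can i get", "i'll have"],
    "greeting":      ["hi", "hello", "hey"],
}

def detect_intent(message: str) -> str:
    text = message.lower()
    for intent, cues in INTENTS.items():
        if any(cue in text for cue in cues):
            return intent
    return "unknown"

def respond(message: str, state: dict) -> str:
    intent = detect_intent(message)
    if intent == "greeting":
        return "Welcome! What would you like to eat?"
    if intent in ("place_order", "correct_order"):
        state["order"] = message  # the order can be revised at any point
        prefix = "My apologies, updating your order: " if intent == "correct_order" else "Got it: "
        return prefix + repr(message)
    return "Sorry, could you rephrase that?"

state = {}
print(respond("Hi there", state))
print(respond("I'd like the seafood pasta", state))
print(respond("No, I wanted the seafood pasta", state))  # the correction is understood
```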
TayTweets learns the wrong things and says the wrong things

Let's go back to when you were a child. Your parents raised you in a (hopefully) controlled environment: they nurtured you, kept things positive, and set boundaries to keep you from getting into anything bad. Then, as you grew up, you were exposed to influences from many other sources: friends, peers, teachers, and so on. Some were good influences; others were bad. Effort and your own innate capacity for change and judgment eventually led you to grow into a positive, well-functioning adult member of society.
An AI is simultaneously smarter than you and dumber than you. It’s smart because it learns really quickly. It’s dumb because it doesn’t know what to learn or what’s good or bad to learn. It also doesn’t have any desires of its own. It just absorbs whatever you provide it. This process of learning in an AI is called machine learning.
Machine learning is a simple concept to understand. Someone provides an AI with data and a framework for interpreting that data, and the AI gets better at managing, responding to, and understanding the data over time. Great care must be taken in creating and following a governance process to ensure the AI does not learn from the wrong data.
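As a rough, hypothetical illustration of "data plus a framework for interpreting it" (this uses scikit-learn and made-up example messages, not any vendor's actual pipeline), the sketch below trains a tiny text classifier on labeled candidate questions. Everything the model "knows" comes from the examples it was given, which is why the quality of that data matters so much.

```python
# Hypothetical sketch: labeled example messages (the data) plus a simple
# text-classification pipeline (the framework for interpreting it).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_messages = [
    "what is the salary range for this role",
    "how much does this position pay",
    "when will I hear back about my application",
    "what is the status of my application",
]
training_labels = ["pay", "pay", "status", "status"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(training_messages, training_labels)

# The model only generalizes from what it was shown.
print(model.predict(["does this job pay well"]))        # likely ['pay']
print(model.predict(["any update on my application"]))  # likely ['status']
```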
Take Tay Tweets, for example. Tay was an AI that Microsoft released on Twitter, and it used machine learning to evolve its communication style over time. Its machine learning algorithms learned from general, unstructured data fed to it by the Twitter community, data that was not screened by the Microsoft team.
Within a short time of launching, Tay went from writing mild tweets such as "So glad to meet you, I love people" to sending out derogatory and racist messaging through its account. Tay is a great case study in what can go wrong if you don't have the right governance around the data you're using to power machine learning.
Now, Tay Tweets is an extreme example, and something like it is quite unlikely to happen with any recruiting chatbot: no chatbot vendor allows unscreened, unstructured data to enter its AI's machine learning process. But rigorous governance is still necessary for every machine learning program powering recruiting AI.
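What such governance looks like in code will differ by vendor, but at its simplest it is a gate in front of the training set. The sketch below is purely hypothetical: the block-list and the human-approval step are placeholders for whatever screening process a vendor actually runs before data is allowed to influence the model.

```python
# Hypothetical data-governance gate: text reaches the training set only if it
# clears an automated screen AND has been explicitly approved by a reviewer.
BLOCKED_TERMS = {"offensive_term_1", "offensive_term_2"}  # stand-in block-list

def passes_screening(message: str) -> bool:
    text = message.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def build_training_set(raw_messages, human_approved):
    return [m for m in raw_messages
            if passes_screening(m) and m in human_approved]

raw = [
    "So glad to meet you, I love people",
    "offensive_term_1 and other abuse scraped from the feed",
]
approved = {"So glad to meet you, I love people"}
print(build_training_set(raw, approved))  # only the screened, approved message
```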
A Note to Talent Acquisition Leaders
AI is a fascinating new technology that is transforming talent acquisition and HR. If you take nothing else away, remember this – picking an AI that creates a personal, intuitive, and human experience is always a winning strategy.
This article first appeared here.
Author Bio
Ameya Deshmukh is Marketing Manager at Mya Systems.