Paulo Azevedo, 2/27/2019

Demystifying the development of chatbots - Part 2: Why do it?

In my previous blog post, I talked a bit about the usability of chatbots. The initial plan was to cover more technical aspects in this second post, but I've decided to leave that for a future post and, instead, to talk about why companies should care about this technology.

To get to that point, let's first revisit a bit of computer history. In the mid-1940s the first general-purpose electronic computers came to be. Let's call this the first computer revolution. They occupied vast rooms, consumed monstrous amounts of energy, and by today's standards were extremely slow. Not to mention that usability was terrible. The programmers, usually an all-female team, were known back then as "computers", a word that nowadays designates the equipment rather than the operators. They had to plug and unplug cables to connect different circuits together, a difficult job that required lots of training and meticulous execution. Back then, a country would be lucky to have one such device per major university, and they were neither interconnected nor standardized. A program written for one computer would work solely on that machine. In fact, many such computers were wired to execute specific programs, so even switching programs was time-consuming and difficult.

About 40 years later, in the early to mid-1980s, we had the first PCs, or personal computers. The goal those devices achieved in many countries was that of one computer per home, and for this text we shall call this the second computer revolution. The fact that the World-Wide Web came to be by the end of that decade helped greatly with the popularization of those devices in the 1990s, but they were also far more attractive in general, for a few reasons. They were smaller, consumed a reasonable amount of energy, and were much faster than the first electronic computers. Another aspect that helped those devices, especially from the early 90s on, was the fact that they had graphical user interfaces, which meant users were able to get their work done with little or no training.

Of course, many people didn't see the need for a computer at home and found no justification for the cost. Some of the early adopters saw a lot of potential in those machines, but couldn't do their day-to-day work on them, as their employers didn't yet have computerized systems. Many of those early adopters then did things such as playing simple games or writing simple programs, which was seen mostly as a gimmick by everyone but other computer enthusiasts. Nowadays, most office workers cannot imagine how they'd get their job done without computers. In fact, computers increased productivity so much that many of today's one-person jobs would take dozens of people if they were done manually. Before PCs and the Web, nobody knew how to buy a travel ticket other than by going to the carrier's counter at the airport or station, or perhaps to a travel agent. Nowadays, hardly anyone considers purchasing tickets in person; most people do it online. Even getting groceries or flowers online is becoming a reality in many places.

Two decades later, in 2007, came the advent of smartphones, with the launch of the first iPhone. Some consider the smartphone to have been born a bit earlier, with Nokia's Symbian operating system and the N95 mobile phone, but with the iPhone we can safely say the smartphone era had begun. We can call this the third computer revolution. In any case, we found ourselves with the possibility of having one computer not per university nor per home, but per pocket. Furthermore, this revolution took about half the time that had passed between the first and second computer revolutions.

As with the second computer revolution, the majority of people didn't immediately grasp the impact of this technology. It's understandable, as one of the most downloaded apps back then was called iBeer, which let you drink a virtual beer displayed on your screen and emit a loud burp at the end. Clearly, it was just a stunt, and it took a few years for people to start buying apps, and even longer for people to start buying things from within apps, including travel arrangements. Nowadays, though, it's common. Mobile purchasing hasn't surpassed sales on the desktop yet, but it might in the near future.

Personally, I believe voice interfaces will bring about the fourth computer revolution. It's not about a computer per university, household, or pocket. Instead, it's about being immersed in technology, stating your queries out loud anywhere and getting a response. The vision is to always get the knowledge you need easily, have it at hand, and not be bottlenecked by access to information or by the ability to act on it. This means not having to take a computer out of your pocket to look up the data you need or to perform other actions. That includes making purchases and bookings, controlling a smart home, being reminded of a task or a recipe's ingredients, and, of course, communicating with other people.

It is evident that many of the aforementioned use cases are already implemented one way or another in today's voice assistants, but the experience is still not as seamless as it could and should be. For the time being, we may consider asking Alexa to tell a joke, or asking Google Assistant for its favorite ice cream flavor, to be gimmicks. Of course, those things will still work in the future, the same way iBeer still works on today's iPhones. But we can have more.

To get there, we need to experiment with the technology and build systems around what's already available, in a way that generates value. The increased usage will not only help improve voice and language models, creating a more seamless experience. It will also help "normalize" the use of voice assistants, shifting the zeitgeist the same way that turned buying expensive goods or booking holidays on a smartphone from sci-fi into something mundane. Furthermore, whoever does that now will help shape what this technology becomes in the years to come. And for something with such pervasive potential, we should indeed start early, addressing security and privacy concerns together with those usability quirks. Ultimately, working on it now will help us get there faster, which I personally see as a desirable outcome.

One example we're currently exploring at FlixTech is in the domain of Customer Service. Not the dystopian future of having humans replaced by bots. Instead, imagine yourself answering literally dozens of times a day how much luggage someone can bring on board, whether pets are allowed, or any other simple question with a straightforward answer. Those are all legitimate questions to which customers really need an answer, but they don't require any critical thinking or problem-solving skills, which is where humans thrive. Instead, such queries are just wearisome to those answering them. Furthermore, while one such question is being answered, another customer, with a time-sensitive issue, could be waiting in the queue, which is far from ideal.

During interviews with representatives from our customer service, this is exactly the feedback we collected. They want to be there for the people who need their problem-solving skills, and this not only makes business sense, but is also the best thing we can do for those customers with time-sensitive issues. Not to mention it's a more fulfilling task for the people working those jobs.

Assuming such a solution is successfully implemented, customers with both types of queries would benefit from reduced waiting times: those who talk to a chatbot would have virtually zero waiting time, whereas the others would benefit from a shorter queue.
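To make the triage idea a bit more concrete, below is a minimal sketch in Python of how such a split could work: simple FAQ-style questions get an instant canned answer, and anything the bot isn't confident about is handed off to a human agent. The keyword matching, the FAQ entries, and the answers are illustrative assumptions only, not our actual system or policies; a production chatbot would rely on a trained intent-classification model rather than keywords.

```python
# A minimal, hypothetical sketch of the triage described above: well-known,
# simple questions get an instant canned answer, everything else is handed
# off to a human agent. The FAQ entries and answers below are placeholders
# for illustration only; a real system would use a trained intent model
# instead of keyword matching.

import re
from dataclasses import dataclass


@dataclass
class BotReply:
    text: str
    handoff_to_agent: bool  # True when a human should take over


# Illustrative FAQ "intents": trigger keywords mapped to a canned answer.
FAQ = [
    ({"luggage", "baggage", "suitcase"},
     "Here is our luggage policy: ..."),                   # placeholder answer
    ({"pet", "pets", "dog", "cat"},
     "Here is our policy on travelling with pets: ..."),   # placeholder answer
]


def triage(message: str) -> BotReply:
    """Answer simple FAQ-style questions directly; escalate everything else."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, answer in FAQ:
        if words & keywords:  # any trigger keyword present
            return BotReply(text=answer, handoff_to_agent=False)
    # No confident match: route to a human agent instead of guessing.
    return BotReply(
        text="Let me connect you with one of our agents.",
        handoff_to_agent=True,
    )


if __name__ == "__main__":
    print(triage("How much luggage can I bring on board?"))
    print(triage("My bus is delayed and I'm going to miss my connection!"))
```

In this sketch, the handoff_to_agent flag marks the point where a real implementation would place the conversation in the human agents' queue, which is exactly the routing described above.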

Thus, when Customer Service is empowered by automation, agents can be much more productive in the situations where no robot could solve the problem. At the same time, no customer's query goes unanswered, especially in situations where time counts.

Of course, there could be technological challenges and hiccups along the way, but I believe that for the benefit of everyone involved, such a leap is not only recommended: it’s necessary.