For at least the last year, most of the conversations I have had with our clients about strategy and innovation inevitably gravitate to voice and artificial intelligence (AI). There are, of course, tactical conversations to be had centered around getting data and services in good enough shape to drive these innovations. But beyond this, most clients simply don't know where to begin when it comes to applying these technologies as part of their customer engagement strategy.
At CapTech, we've spent a great deal of time thinking about practical AI and voice applications for consumers, even doing tech challenges internally to experiment with these technologies; I'm going to share some of those learnings here. But first, let's define terms. For the purposes of this blog, we'll think of voice (sometimes referred to as natural language processing) as the interface or "presentation layer" - the way the consumer interacts with the technology. AI is the brains behind it and machine learning is how it gets smart. For now, we're going to get into applications of these concepts for consumers. There are seven patterns we've found helpful for thinking about voice and AI, but we're going to focus on the first three foundational ones here.
The first pattern we use is probably the simplest: transactional applications. These are simple requests for information or action that can be performed by voice. There's really not much AI involved beyond fulfilling the service request. In our recent study of smart speaker users, we saw that simple transactions such as playing music were the most common uses of voice. This pattern might not seem very exciting, but I would encourage companies not to neglect these use cases; they are the foundation for a richer set of interactions.
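To make the transactional pattern concrete, here is a minimal sketch of how a voice request can map directly to a single service call. The intent names, slot format, and handler functions are hypothetical illustrations, not any specific platform's API:

```python
# Transactional pattern sketch: one voice intent -> one service call.
# Intent names and handlers below are illustrative assumptions.

def play_music(slots):
    # In a real skill this would call a streaming service's API.
    return f"Playing {slots.get('artist', 'your music')}"

def check_balance(slots):
    # A simple information request: no inference, just a lookup.
    return "Your balance is $42.17"

# Dispatch table from recognized intent to fulfillment function.
HANDLERS = {
    "PlayMusicIntent": play_music,
    "CheckBalanceIntent": check_balance,
}

def handle_request(intent_name, slots):
    """Route a recognized voice intent to its handler."""
    handler = HANDLERS.get(intent_name)
    if handler is None:
        return "Sorry, I can't help with that yet."
    return handler(slots)
```

Even this trivial dispatch structure matters: a robust catalog of simple intents is the base the later patterns build on.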
Closely related to the transactional pattern is what we call the automated pattern. Think of the mundane tasks and interactions that are one-off or repetitive. Lucky for us, computers are very good at doing mundane tasks over and over again. A good example is paying your bills. Think of a bill you've received from a doctor or a contractor. Inevitably, you have to go to some hard-to-use website to enter your account and payment information. It's not hard, but it's also not what I want to be doing when I get home from work. I just want to take a picture of my bill and have it paid. Think about it: the bill has the URL of the website I need to visit and my account number, and the computer already knows my personal and payment information. Why couldn't AI go to that URL and enter the information for me? There's actually a whole subdomain around this type of work, called robotic process automation (RPA), that businesses are using to automate back-end data entry and paperwork processes. It's worth considering for repetitive consumer tasks as well.
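A rough sketch of the bill-paying idea above: pull the payment URL and account number out of the scanned bill, then merge them with the payment details the system already stores. The OCR step and the form submission are stubbed out, and the field names are assumptions; a real RPA tool would handle both very differently:

```python
# Automated pattern sketch: turn a photographed bill into a payment
# payload. OCR output is assumed to be plain text; field names are
# illustrative, not a real RPA product's schema.
import re

def extract_bill_fields(ocr_text):
    """Find the payment URL and account number in OCR'd bill text."""
    url = re.search(r"https?://\S+", ocr_text)
    account = re.search(r"Account\s*#?:?\s*(\w+)", ocr_text)
    return {
        "url": url.group(0) if url else None,
        "account": account.group(1) if account else None,
    }

def build_payment(ocr_text, profile):
    """Merge bill fields with the user's stored profile into one payload."""
    fields = extract_bill_fields(ocr_text)
    if not fields["url"] or not fields["account"]:
        # Automation should fail loudly and hand back to the user.
        raise ValueError("Could not read the bill; ask the user for help")
    return {**fields, **profile}
```

The interesting design question is the failure path: automation is only trustworthy if it knows when to stop and ask a human.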
The third pattern, the inferential pattern, centers on what AI should know about me without my having to tell it explicitly. This touches on important privacy and security issues that are beyond the scope of this blog. But, suffice it to say, for AI to be helpful there are things it needs to know about me (that, hopefully, I knowingly decided to share). So, if I told my smart speaker to "order more coffee," it would have to know several things: that I like Donut Shop K-Cups, my payment information, and where I want the order shipped. What's important for product teams is to think about the inferential information we can or should know about our customers in order to provide more meaningful help.
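The "order more coffee" example can be sketched as a request that carries almost no information by itself, with the skill filling the gaps from a stored profile. The profile fields and product name are illustrative assumptions drawn from the example above:

```python
# Inferential pattern sketch: resolve a vague voice request using
# what the system already knows about the user. Profile fields are
# illustrative assumptions.

USER_PROFILE = {
    "usual_coffee": "Donut Shop K-Cups",
    "payment": "card on file",
    "ship_to": "home address",
}

def order_more(item_category, profile):
    """Turn 'order more <category>' into a concrete order, or a question."""
    known_key = f"usual_{item_category}"
    if known_key not in profile:
        # Nothing to infer from: fall back to asking the user.
        return {"clarify": f"Which {item_category} would you like?"}
    return {
        "product": profile[known_key],
        "payment": profile["payment"],
        "ship_to": profile["ship_to"],
    }
```

The fallback branch is the point: where inference runs out, the skill should ask rather than guess.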
If I asked something like, "Am I saving enough for my daughter's college tuition?", what would the skill need to ask, what should it already know, and what is it fair to assume? These are the kinds of questions product teams need to ask.
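One way to make those questions concrete is to triage each piece of data the skill would need into "must ask," "should already know," or "fair to assume." The data points and their classifications below are illustrative only:

```python
# Triage sketch for the college-savings question: classify each required
# data point by how the skill should obtain it. Fields and categories
# are illustrative assumptions, not a prescribed taxonomy.

TUITION_DATA = {
    "daughter_age": "should_know",          # likely in the customer profile
    "current_savings": "should_know",       # the institution holds this
    "target_school_cost": "must_ask",       # only the family knows the goal
    "tuition_inflation_rate": "fair_to_assume",  # a published average works
}

def triage(data_map):
    """Group required data points by how the skill should obtain them."""
    buckets = {"must_ask": [], "should_know": [], "fair_to_assume": []}
    for field, source in data_map.items():
        buckets[source].append(field)
    return buckets
```

Walking a real feature through this kind of table quickly surfaces where the product needs consent, integration work, or a clarifying question.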
Again, these patterns won't tell product teams exactly how to apply voice or AI to their business, but hopefully they help frame the solution space and get your team on the right path.