Transforming Customer Experience with AI at Alorica
By Martha Heller | Reposted from CIO
Mike Clifton, CIO turned CEO of the global customer solutions business, offers advice to CIOs on how to get started with AI.
Alorica, a global leader in customer experience solutions serving Fortune 500 companies, has positioned itself at the forefront of AI innovation in customer service, developing solutions that directly address client needs while elevating agent capabilities. With operations around the world, Alorica leverages AI to transform how businesses connect with their customers. Michael Clifton, who joined as CIO and is now co-CEO, discusses his dual focus on commercial AI products as well as internal productivity, and provides fresh lessons learned on driving value from AI.
At Alorica, you develop AI commercial products for your clients while leveraging AI for internal productivity use cases. What are some examples of your AI products?
Our business is customer experience, whether we deliver that experience by phone, chat, web, or any of the other channels our clients offer their customers for a seamless and enjoyable experience. So to maintain our value, we’re always evolving our products. In the context of voice, that means giving our clients’ customers the right information quickly, with a clear dialect, and with increasing levels of automation. We’ve built products like ReVoLT, which translates voice in real time into 75 languages. And there’s Agent Assist, which listens to a call and drives a query engine that prompts the agent, who no longer needs to memorize product or support information.
Why is the customer service industry such a hotspot for gen AI?
Consulting firms say it’s because our productivity is so well measured that when you apply a broad-scale capability like generative AI, you can see the impact and justify more investment. Customer service agents are paid for their time on the phone, so we carefully measure everything from first-call resolution and time tracking to SLA management. The fastest path to justifying a new technology is to put it in a highly measured context, because you’ll know the impact right away.
So a measured context is one parameter for AI ROI. But what kind of data do you need for a solid use case?
We used to need structured data because our machine learning models expected field-level information. Today, we don’t care whether the data is structured, because we can ingest it all: images, recordings, documents, PDF files, or large data lakes. What matters is that the data is ingestible and has longevity. Think of the data as raw fuel for the models you’re building.
More important than structure is the extent of the data: having enough of it to train the model. You train the model on the data you’re ingesting, which means the data, the raw fuel, is critically important. When you test the extent of the data, you reduce the risk that the model will answer questions incorrectly.
Once companies have ingested the data and tested it to the extent of the questions it can answer, they need to set boundaries. The team at DeepSeek, for example, decided the model shouldn’t answer political questions. Even though they had ingested political data, they defined the model to respond, “I am not equipped to answer.”
Let’s keep building the parameters for a high value gen AI use case. We’ve discussed the extensibility of data. What else?
We look at AI in three ways. The first is using AI operationally. Our BI warehouse, which holds staffing, finance, and sales data, will never die. But by ingesting the data every hour, I can get more than a static dashboard. I can get an answer to a more precise question, like: what are all the hours a customer spent with us in this center, in this country, during this eight-hour window?
The second is mining AI for productivity improvements, whether we apply AI internally or build an AI product by reusing the tasks that our customer service agents do globally across a trillion discussions a day.
The third lens is the agent experience. We’ve built a framework around simulated AI training to make difficult tasks easier for our people. They can get into an AI virtual simulator, talk, and get scored as if in real time. That 10-minute AI training class is akin to a three-hour virtual class and is more effective. By using AI in the virtualized world to simulate what could happen in real time, but in a safe harbor, our agents learn faster.
With any new technology, adoption isn’t a one-time hit. The adoption curve of this technology is still in its infancy. We’re at step one, and as these models become more popular, we’ll see industry-specific models, and companies will start publishing their own. You could see a large-scale healthcare company using its own LLM and letting its partners and downstream logistics providers use what it publishes.
What advice do you have for technology leaders under pressure to respond to AI?
You’ll find more value in AI if you start to democratize it. Some companies do so at the desktop and create workgroup-based capabilities, like Notebook or Copilot, to build some light models and drive early functionality. But don’t stop there. Now that you’ve given AI tooling power to your employees, coach them to think about two important questions: how do I bring my data together, and how do I query this tool? If you instill these two questions throughout your entire organization, you’ll democratize AI faster than your competitors.
Then when you move beyond solving workgroup-level problems into enterprise-level ones, you can replace old integration work, moving data from one system to another, with an agentic AI tool, without writing an integration. Agentic AI queries the bot, the bot learns, and over time you build first a redundant capability and then a mature one.
Once you’ve democratized AI at the desktop through tools like Claude or Copilot, and then moved up to the enterprise, you need to make big decisions about an architecture that supports the enterprise, where you can make some AI investments and stick with them for a while. You can bet on some tools, whether LLMs, data ingestion, or speech-to-text. The basics of that infrastructure alone will give you capabilities right out of the box that you can start piloting before signing up for your first use case and ROI outcome.
We do pilots all over the place, but we also see people build small bots to automate tasks. Agentic AI has taken off, so there’s opportunity there. Now you can sign up for your most promising use case, start automating repetitive tasks, and look at agentic AI to do some really great work.
What new skills will companies need to hire for enterprise gen AI?
Prompt engineers are good at asking the model the right questions, so that’s a skill we’ll need to expand across all our teams. You’ll also need AI collaborators at the department level. This isn’t a technologist but an enabler who thinks about how to use AI to solve problems. Data scientists will be important, too. They’re masters at organizing and ingesting data, and at making sure the model is appropriately tested. But in general, we need people just coming out of school, because AI will evolve our businesses over time, and these are the people to help us steward the art of the possible.